00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2431 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3692 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.095 The recommended git tool is: git 00:00:00.095 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.148 Using shallow fetch with depth 1 00:00:00.148 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.148 > git --version # timeout=10 00:00:00.180 > git --version # 'git version 2.39.2' 00:00:00.180 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.208 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.208 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.871 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.882 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.893 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.893 > git config core.sparsecheckout # timeout=10 00:00:04.904 > git read-tree -mu HEAD # timeout=10 00:00:04.918 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.942 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.942 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.038 [Pipeline] Start of Pipeline 00:00:05.050 [Pipeline] library 00:00:05.052 Loading library shm_lib@master 00:00:05.052 Library shm_lib@master is cached. Copying from home. 00:00:05.066 [Pipeline] node 00:00:05.077 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.079 [Pipeline] { 00:00:05.089 [Pipeline] catchError 00:00:05.091 [Pipeline] { 00:00:05.103 [Pipeline] wrap 00:00:05.111 [Pipeline] { 00:00:05.118 [Pipeline] stage 00:00:05.120 [Pipeline] { (Prologue) 00:00:05.315 [Pipeline] sh 00:00:05.595 + logger -p user.info -t JENKINS-CI 00:00:05.615 [Pipeline] echo 00:00:05.616 Node: WFP21 00:00:05.624 [Pipeline] sh 00:00:05.918 [Pipeline] setCustomBuildProperty 00:00:05.929 [Pipeline] echo 00:00:05.930 Cleanup processes 00:00:05.935 [Pipeline] sh 00:00:06.214 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.214 3472839 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.225 [Pipeline] sh 00:00:06.505 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.505 ++ grep -v 'sudo pgrep' 00:00:06.505 ++ awk '{print $1}' 00:00:06.505 + sudo kill -9 00:00:06.505 + true 00:00:06.517 [Pipeline] cleanWs 00:00:06.524 [WS-CLEANUP] Deleting project workspace... 00:00:06.524 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.529 [WS-CLEANUP] done 00:00:06.533 [Pipeline] setCustomBuildProperty 00:00:06.544 [Pipeline] sh 00:00:06.820 + sudo git config --global --replace-all safe.directory '*' 00:00:06.894 [Pipeline] httpRequest 00:00:07.294 [Pipeline] echo 00:00:07.295 Sorcerer 10.211.164.20 is alive 00:00:07.303 [Pipeline] retry 00:00:07.305 [Pipeline] { 00:00:07.315 [Pipeline] httpRequest 00:00:07.319 HttpMethod: GET 00:00:07.319 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.319 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.321 Response Code: HTTP/1.1 200 OK 00:00:07.322 Success: Status code 200 is in the accepted range: 200,404 00:00:07.322 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.346 [Pipeline] } 00:00:08.363 [Pipeline] // retry 00:00:08.370 [Pipeline] sh 00:00:08.651 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.666 [Pipeline] httpRequest 00:00:09.027 [Pipeline] echo 00:00:09.029 Sorcerer 10.211.164.20 is alive 00:00:09.038 [Pipeline] retry 00:00:09.040 [Pipeline] { 00:00:09.053 [Pipeline] httpRequest 00:00:09.057 HttpMethod: GET 00:00:09.058 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.058 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.078 Response Code: HTTP/1.1 200 OK 00:00:09.079 Success: Status code 200 is in the accepted range: 200,404 00:00:09.079 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:19.717 [Pipeline] } 00:01:19.734 [Pipeline] // retry 00:01:19.741 [Pipeline] sh 00:01:20.020 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:22.558 [Pipeline] sh 00:01:22.838 + git -C spdk log --oneline -n5 00:01:22.838 c13c99a5e test: Various fixes for Fedora40 00:01:22.838 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:22.838 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:22.838 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:22.838 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:22.849 [Pipeline] } 00:01:22.862 [Pipeline] // stage 00:01:22.872 [Pipeline] stage 00:01:22.874 [Pipeline] { (Prepare) 00:01:22.890 [Pipeline] writeFile 00:01:22.907 [Pipeline] sh 00:01:23.189 + logger -p user.info -t JENKINS-CI 00:01:23.202 [Pipeline] sh 00:01:23.482 + logger -p user.info -t JENKINS-CI 00:01:23.494 [Pipeline] sh 00:01:23.774 + cat autorun-spdk.conf 00:01:23.774 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.774 SPDK_TEST_NVMF=1 00:01:23.774 SPDK_TEST_NVME_CLI=1 00:01:23.774 SPDK_TEST_NVMF_NICS=mlx5 00:01:23.774 SPDK_RUN_UBSAN=1 00:01:23.774 NET_TYPE=phy 00:01:23.781 RUN_NIGHTLY=1 00:01:23.786 [Pipeline] readFile 00:01:23.811 [Pipeline] withEnv 00:01:23.813 [Pipeline] { 00:01:23.826 [Pipeline] sh 00:01:24.108 + set -ex 00:01:24.108 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:24.108 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:24.108 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.108 ++ SPDK_TEST_NVMF=1 00:01:24.108 ++ SPDK_TEST_NVME_CLI=1 00:01:24.108 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:24.108 ++ SPDK_RUN_UBSAN=1 00:01:24.108 ++ NET_TYPE=phy 00:01:24.108 ++ RUN_NIGHTLY=1 00:01:24.108 + case 
$SPDK_TEST_NVMF_NICS in 00:01:24.108 + DRIVERS=mlx5_ib 00:01:24.108 + [[ -n mlx5_ib ]] 00:01:24.108 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:24.108 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:30.692 rmmod: ERROR: Module irdma is not currently loaded 00:01:30.692 rmmod: ERROR: Module i40iw is not currently loaded 00:01:30.692 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:30.692 + true 00:01:30.692 + for D in $DRIVERS 00:01:30.692 + sudo modprobe mlx5_ib 00:01:30.692 + exit 0 00:01:30.701 [Pipeline] } 00:01:30.716 [Pipeline] // withEnv 00:01:30.721 [Pipeline] } 00:01:30.733 [Pipeline] // stage 00:01:30.741 [Pipeline] catchError 00:01:30.742 [Pipeline] { 00:01:30.754 [Pipeline] timeout 00:01:30.754 Timeout set to expire in 1 hr 0 min 00:01:30.756 [Pipeline] { 00:01:30.781 [Pipeline] stage 00:01:30.783 [Pipeline] { (Tests) 00:01:30.799 [Pipeline] sh 00:01:31.084 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:31.084 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:31.084 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:31.084 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:31.084 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:31.084 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:31.084 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:31.084 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:31.084 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:31.084 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:31.084 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:31.084 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:31.084 + source /etc/os-release 00:01:31.084 ++ NAME='Fedora Linux' 00:01:31.084 ++ VERSION='39 (Cloud Edition)' 00:01:31.084 ++ ID=fedora 00:01:31.084 ++ VERSION_ID=39 00:01:31.084 ++ VERSION_CODENAME= 00:01:31.084 ++ PLATFORM_ID=platform:f39 00:01:31.084 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:31.084 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:31.084 ++ LOGO=fedora-logo-icon 00:01:31.084 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:31.084 ++ HOME_URL=https://fedoraproject.org/ 00:01:31.084 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:31.084 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:31.084 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:31.084 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:31.084 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:31.084 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:31.084 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:31.084 ++ SUPPORT_END=2024-11-12 00:01:31.084 ++ VARIANT='Cloud Edition' 00:01:31.084 ++ VARIANT_ID=cloud 00:01:31.084 + uname -a 00:01:31.084 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:31.084 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:34.376 Hugepages 00:01:34.376 node hugesize free / total 00:01:34.376 node0 1048576kB 0 / 0 00:01:34.376 node0 2048kB 0 / 0 00:01:34.376 node1 1048576kB 0 / 0 00:01:34.376 node1 2048kB 0 / 0 00:01:34.376 00:01:34.376 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:34.376 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:34.376 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:34.376 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:34.376 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:34.376 I/OAT 
0000:00:04.4 8086 2021 0 ioatdma - - 00:01:34.376 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:34.376 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:34.376 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:34.376 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:34.376 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:34.376 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:34.376 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:34.376 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:34.376 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:34.376 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:34.376 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:34.376 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:34.376 + rm -f /tmp/spdk-ld-path 00:01:34.376 + source autorun-spdk.conf 00:01:34.376 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.376 ++ SPDK_TEST_NVMF=1 00:01:34.376 ++ SPDK_TEST_NVME_CLI=1 00:01:34.376 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:34.376 ++ SPDK_RUN_UBSAN=1 00:01:34.376 ++ NET_TYPE=phy 00:01:34.376 ++ RUN_NIGHTLY=1 00:01:34.376 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:34.376 + [[ -n '' ]] 00:01:34.376 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:34.376 + for M in /var/spdk/build-*-manifest.txt 00:01:34.376 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:34.376 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:34.376 + for M in /var/spdk/build-*-manifest.txt 00:01:34.376 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:34.376 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:34.376 + for M in /var/spdk/build-*-manifest.txt 00:01:34.376 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:34.376 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:34.376 ++ uname 00:01:34.376 + [[ Linux == \L\i\n\u\x ]] 00:01:34.376 + sudo dmesg -T 00:01:34.376 + sudo dmesg --clear 00:01:34.376 + dmesg_pid=3473751 00:01:34.376 + [[ Fedora Linux == FreeBSD ]] 00:01:34.376 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:34.376 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:34.376 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:34.376 + [[ -x /usr/src/fio-static/fio ]] 00:01:34.376 + export FIO_BIN=/usr/src/fio-static/fio 00:01:34.376 + FIO_BIN=/usr/src/fio-static/fio 00:01:34.376 + sudo dmesg -Tw 00:01:34.377 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:34.377 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:34.377 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:34.377 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:34.377 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:34.377 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:34.377 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:34.377 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:34.377 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:34.377 Test configuration: 00:01:34.377 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.377 SPDK_TEST_NVMF=1 00:01:34.377 SPDK_TEST_NVME_CLI=1 00:01:34.377 SPDK_TEST_NVMF_NICS=mlx5 00:01:34.377 SPDK_RUN_UBSAN=1 00:01:34.377 NET_TYPE=phy 00:01:34.377 RUN_NIGHTLY=1 11:29:04 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:34.377 11:29:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:34.377 11:29:04 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:34.377 11:29:04 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:34.377 11:29:04 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:34.377 11:29:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.377 11:29:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.377 11:29:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.377 11:29:04 -- paths/export.sh@5 -- $ export PATH 00:01:34.377 11:29:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.377 11:29:04 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:34.377 11:29:04 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:34.377 11:29:04 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733221744.XXXXXX 00:01:34.377 11:29:04 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733221744.2KAgMp 00:01:34.377 11:29:04 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:34.377 11:29:04 -- common/autobuild_common.sh@446 
-- $ '[' -n '' ']' 00:01:34.377 11:29:04 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:34.377 11:29:04 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:34.377 11:29:04 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:34.377 11:29:04 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:34.377 11:29:04 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:34.377 11:29:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.377 11:29:04 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:34.377 11:29:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:34.377 11:29:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:34.377 11:29:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:34.377 11:29:04 -- spdk/autobuild.sh@16 -- $ date -u 00:01:34.377 Tue Dec 3 10:29:04 AM UTC 2024 00:01:34.377 11:29:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:34.377 LTS-67-gc13c99a5e 00:01:34.377 11:29:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:34.377 11:29:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:34.377 11:29:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:34.377 11:29:04 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:34.377 11:29:04 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:34.377 11:29:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.377 ************************************ 00:01:34.377 START TEST ubsan 00:01:34.377 ************************************ 00:01:34.377 11:29:04 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:34.377 using ubsan 00:01:34.377 00:01:34.377 real 0m0.000s 00:01:34.377 user 0m0.000s 00:01:34.377 sys 0m0.000s 00:01:34.377 11:29:04 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:34.377 11:29:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.377 ************************************ 00:01:34.377 END TEST ubsan 00:01:34.377 ************************************ 00:01:34.636 11:29:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:34.636 11:29:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:34.636 11:29:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:34.636 11:29:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:34.636 11:29:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:34.636 11:29:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:34.636 11:29:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:34.636 11:29:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:34.636 11:29:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:34.636 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:34.636 Using default DPDK in 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:34.895 Using 'verbs' RDMA provider 00:01:50.404 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:02.606 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:02.606 Creating mk/config.mk...done. 00:02:02.606 Creating mk/cc.flags.mk...done. 00:02:02.606 Type 'make' to build. 00:02:02.606 11:29:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:02.606 11:29:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:02.606 11:29:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:02.606 11:29:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.606 ************************************ 00:02:02.606 START TEST make 00:02:02.606 ************************************ 00:02:02.606 11:29:32 -- common/autotest_common.sh@1114 -- $ make -j112 00:02:02.606 make[1]: Nothing to be done for 'all'. 00:02:10.728 The Meson build system 00:02:10.728 Version: 1.5.0 00:02:10.728 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:10.728 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:10.728 Build type: native build 00:02:10.728 Program cat found: YES (/usr/bin/cat) 00:02:10.729 Project name: DPDK 00:02:10.729 Project version: 23.11.0 00:02:10.729 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:10.729 C linker for the host machine: cc ld.bfd 2.40-14 00:02:10.729 Host machine cpu family: x86_64 00:02:10.729 Host machine cpu: x86_64 00:02:10.729 Message: ## Building in Developer Mode ## 00:02:10.729 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:10.729 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:10.729 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:10.729 Program python3 found: YES (/usr/bin/python3) 00:02:10.729 Program cat found: YES (/usr/bin/cat) 00:02:10.729 Compiler for C supports arguments -march=native: YES 00:02:10.729 Checking for size of "void *" : 8 00:02:10.729 Checking for size of "void *" : 8 (cached) 00:02:10.729 Library m found: YES 00:02:10.729 Library numa found: YES 00:02:10.729 Has header "numaif.h" : YES 00:02:10.729 Library fdt found: NO 00:02:10.729 Library execinfo found: NO 00:02:10.729 Has header "execinfo.h" : YES 00:02:10.729 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:10.729 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:10.729 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:10.729 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:10.729 Run-time dependency openssl found: YES 3.1.1 00:02:10.729 Run-time dependency libpcap found: YES 1.10.4 00:02:10.729 Has header "pcap.h" with dependency libpcap: YES 00:02:10.729 Compiler for C supports arguments -Wcast-qual: YES 00:02:10.729 Compiler for C supports arguments -Wdeprecated: YES 00:02:10.729 Compiler for C supports arguments -Wformat: YES 00:02:10.729 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:10.729 Compiler for C supports arguments -Wformat-security: NO 00:02:10.729 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:10.729 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:10.729 Compiler for C supports arguments 
-Wnested-externs: YES 00:02:10.729 Compiler for C supports arguments -Wold-style-definition: YES 00:02:10.729 Compiler for C supports arguments -Wpointer-arith: YES 00:02:10.729 Compiler for C supports arguments -Wsign-compare: YES 00:02:10.729 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:10.729 Compiler for C supports arguments -Wundef: YES 00:02:10.729 Compiler for C supports arguments -Wwrite-strings: YES 00:02:10.729 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:10.729 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:10.729 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:10.729 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:10.729 Program objdump found: YES (/usr/bin/objdump) 00:02:10.729 Compiler for C supports arguments -mavx512f: YES 00:02:10.729 Checking if "AVX512 checking" compiles: YES 00:02:10.729 Fetching value of define "__SSE4_2__" : 1 00:02:10.729 Fetching value of define "__AES__" : 1 00:02:10.729 Fetching value of define "__AVX__" : 1 00:02:10.729 Fetching value of define "__AVX2__" : 1 00:02:10.729 Fetching value of define "__AVX512BW__" : 1 00:02:10.729 Fetching value of define "__AVX512CD__" : 1 00:02:10.729 Fetching value of define "__AVX512DQ__" : 1 00:02:10.729 Fetching value of define "__AVX512F__" : 1 00:02:10.729 Fetching value of define "__AVX512VL__" : 1 00:02:10.729 Fetching value of define "__PCLMUL__" : 1 00:02:10.729 Fetching value of define "__RDRND__" : 1 00:02:10.729 Fetching value of define "__RDSEED__" : 1 00:02:10.729 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:10.729 Fetching value of define "__znver1__" : (undefined) 00:02:10.729 Fetching value of define "__znver2__" : (undefined) 00:02:10.729 Fetching value of define "__znver3__" : (undefined) 00:02:10.729 Fetching value of define "__znver4__" : (undefined) 00:02:10.729 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:10.729 Message: lib/log: Defining dependency "log" 00:02:10.729 Message: lib/kvargs: Defining dependency "kvargs" 00:02:10.729 Message: lib/telemetry: Defining dependency "telemetry" 00:02:10.729 Checking for function "getentropy" : NO 00:02:10.729 Message: lib/eal: Defining dependency "eal" 00:02:10.729 Message: lib/ring: Defining dependency "ring" 00:02:10.729 Message: lib/rcu: Defining dependency "rcu" 00:02:10.729 Message: lib/mempool: Defining dependency "mempool" 00:02:10.729 Message: lib/mbuf: Defining dependency "mbuf" 00:02:10.729 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:10.729 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:10.729 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:10.729 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:10.729 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:10.729 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:10.729 Compiler for C supports arguments -mpclmul: YES 00:02:10.729 Compiler for C supports arguments -maes: YES 00:02:10.729 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:10.729 Compiler for C supports arguments -mavx512bw: YES 00:02:10.729 Compiler for C supports arguments -mavx512dq: YES 00:02:10.729 Compiler for C supports arguments -mavx512vl: YES 00:02:10.729 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:10.729 Compiler for C supports arguments -mavx2: YES 00:02:10.729 Compiler for C supports arguments -mavx: YES 00:02:10.729 Message: lib/net: Defining dependency "net" 
00:02:10.729 Message: lib/meter: Defining dependency "meter" 00:02:10.729 Message: lib/ethdev: Defining dependency "ethdev" 00:02:10.729 Message: lib/pci: Defining dependency "pci" 00:02:10.729 Message: lib/cmdline: Defining dependency "cmdline" 00:02:10.729 Message: lib/hash: Defining dependency "hash" 00:02:10.729 Message: lib/timer: Defining dependency "timer" 00:02:10.729 Message: lib/compressdev: Defining dependency "compressdev" 00:02:10.729 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:10.729 Message: lib/dmadev: Defining dependency "dmadev" 00:02:10.729 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:10.729 Message: lib/power: Defining dependency "power" 00:02:10.729 Message: lib/reorder: Defining dependency "reorder" 00:02:10.729 Message: lib/security: Defining dependency "security" 00:02:10.729 Has header "linux/userfaultfd.h" : YES 00:02:10.729 Has header "linux/vduse.h" : YES 00:02:10.729 Message: lib/vhost: Defining dependency "vhost" 00:02:10.729 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:10.729 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:10.729 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:10.729 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:10.729 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:10.729 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:10.729 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:10.729 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:10.729 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:10.729 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:10.729 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:10.729 Configuring doxy-api-html.conf using configuration 00:02:10.729 Configuring doxy-api-man.conf using configuration 00:02:10.729 Program mandb found: YES (/usr/bin/mandb) 00:02:10.729 Program sphinx-build found: NO 00:02:10.729 Configuring rte_build_config.h using configuration 00:02:10.729 Message: 00:02:10.730 ================= 00:02:10.730 Applications Enabled 00:02:10.730 ================= 00:02:10.730 00:02:10.730 apps: 00:02:10.730 00:02:10.730 00:02:10.730 Message: 00:02:10.730 ================= 00:02:10.730 Libraries Enabled 00:02:10.730 ================= 00:02:10.730 00:02:10.730 libs: 00:02:10.730 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:10.730 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:10.730 cryptodev, dmadev, power, reorder, security, vhost, 00:02:10.730 00:02:10.730 Message: 00:02:10.730 =============== 00:02:10.730 Drivers Enabled 00:02:10.730 =============== 00:02:10.730 00:02:10.730 common: 00:02:10.730 00:02:10.730 bus: 00:02:10.730 pci, vdev, 00:02:10.730 mempool: 00:02:10.730 ring, 00:02:10.730 dma: 00:02:10.730 00:02:10.730 net: 00:02:10.730 00:02:10.730 crypto: 00:02:10.730 00:02:10.730 compress: 00:02:10.730 00:02:10.730 vdpa: 00:02:10.730 00:02:10.730 00:02:10.730 Message: 00:02:10.730 ================= 00:02:10.730 Content Skipped 00:02:10.730 ================= 00:02:10.730 00:02:10.730 apps: 00:02:10.730 dumpcap: explicitly disabled via build config 00:02:10.730 graph: explicitly disabled via build config 00:02:10.730 pdump: explicitly disabled via build config 00:02:10.730 proc-info: explicitly disabled via build config 00:02:10.730 test-acl: explicitly disabled via build 
config 00:02:10.730 test-bbdev: explicitly disabled via build config 00:02:10.730 test-cmdline: explicitly disabled via build config 00:02:10.730 test-compress-perf: explicitly disabled via build config 00:02:10.730 test-crypto-perf: explicitly disabled via build config 00:02:10.730 test-dma-perf: explicitly disabled via build config 00:02:10.730 test-eventdev: explicitly disabled via build config 00:02:10.730 test-fib: explicitly disabled via build config 00:02:10.730 test-flow-perf: explicitly disabled via build config 00:02:10.730 test-gpudev: explicitly disabled via build config 00:02:10.730 test-mldev: explicitly disabled via build config 00:02:10.730 test-pipeline: explicitly disabled via build config 00:02:10.730 test-pmd: explicitly disabled via build config 00:02:10.730 test-regex: explicitly disabled via build config 00:02:10.730 test-sad: explicitly disabled via build config 00:02:10.730 test-security-perf: explicitly disabled via build config 00:02:10.730 00:02:10.730 libs: 00:02:10.730 metrics: explicitly disabled via build config 00:02:10.730 acl: explicitly disabled via build config 00:02:10.730 bbdev: explicitly disabled via build config 00:02:10.730 bitratestats: explicitly disabled via build config 00:02:10.730 bpf: explicitly disabled via build config 00:02:10.730 cfgfile: explicitly disabled via build config 00:02:10.730 distributor: explicitly disabled via build config 00:02:10.730 efd: explicitly disabled via build config 00:02:10.730 eventdev: explicitly disabled via build config 00:02:10.730 dispatcher: explicitly disabled via build config 00:02:10.730 gpudev: explicitly disabled via build config 00:02:10.730 gro: explicitly disabled via build config 00:02:10.730 gso: explicitly disabled via build config 00:02:10.730 ip_frag: explicitly disabled via build config 00:02:10.730 jobstats: explicitly disabled via build config 00:02:10.730 latencystats: explicitly disabled via build config 00:02:10.730 lpm: explicitly disabled via build config 00:02:10.730 member: explicitly disabled via build config 00:02:10.730 pcapng: explicitly disabled via build config 00:02:10.730 rawdev: explicitly disabled via build config 00:02:10.730 regexdev: explicitly disabled via build config 00:02:10.730 mldev: explicitly disabled via build config 00:02:10.730 rib: explicitly disabled via build config 00:02:10.730 sched: explicitly disabled via build config 00:02:10.730 stack: explicitly disabled via build config 00:02:10.730 ipsec: explicitly disabled via build config 00:02:10.730 pdcp: explicitly disabled via build config 00:02:10.730 fib: explicitly disabled via build config 00:02:10.730 port: explicitly disabled via build config 00:02:10.730 pdump: explicitly disabled via build config 00:02:10.730 table: explicitly disabled via build config 00:02:10.730 pipeline: explicitly disabled via build config 00:02:10.730 graph: explicitly disabled via build config 00:02:10.730 node: explicitly disabled via build config 00:02:10.730 00:02:10.730 drivers: 00:02:10.730 common/cpt: not in enabled drivers build config 00:02:10.730 common/dpaax: not in enabled drivers build config 00:02:10.730 common/iavf: not in enabled drivers build config 00:02:10.730 common/idpf: not in enabled drivers build config 00:02:10.730 common/mvep: not in enabled drivers build config 00:02:10.730 common/octeontx: not in enabled drivers build config 00:02:10.730 bus/auxiliary: not in enabled drivers build config 00:02:10.730 bus/cdx: not in enabled drivers build config 00:02:10.730 bus/dpaa: not in enabled drivers build 
config 00:02:10.730 bus/fslmc: not in enabled drivers build config 00:02:10.730 bus/ifpga: not in enabled drivers build config 00:02:10.730 bus/platform: not in enabled drivers build config 00:02:10.730 bus/vmbus: not in enabled drivers build config 00:02:10.730 common/cnxk: not in enabled drivers build config 00:02:10.730 common/mlx5: not in enabled drivers build config 00:02:10.730 common/nfp: not in enabled drivers build config 00:02:10.730 common/qat: not in enabled drivers build config 00:02:10.730 common/sfc_efx: not in enabled drivers build config 00:02:10.730 mempool/bucket: not in enabled drivers build config 00:02:10.730 mempool/cnxk: not in enabled drivers build config 00:02:10.730 mempool/dpaa: not in enabled drivers build config 00:02:10.730 mempool/dpaa2: not in enabled drivers build config 00:02:10.730 mempool/octeontx: not in enabled drivers build config 00:02:10.730 mempool/stack: not in enabled drivers build config 00:02:10.730 dma/cnxk: not in enabled drivers build config 00:02:10.730 dma/dpaa: not in enabled drivers build config 00:02:10.730 dma/dpaa2: not in enabled drivers build config 00:02:10.730 dma/hisilicon: not in enabled drivers build config 00:02:10.730 dma/idxd: not in enabled drivers build config 00:02:10.730 dma/ioat: not in enabled drivers build config 00:02:10.730 dma/skeleton: not in enabled drivers build config 00:02:10.730 net/af_packet: not in enabled drivers build config 00:02:10.730 net/af_xdp: not in enabled drivers build config 00:02:10.730 net/ark: not in enabled drivers build config 00:02:10.730 net/atlantic: not in enabled drivers build config 00:02:10.730 net/avp: not in enabled drivers build config 00:02:10.730 net/axgbe: not in enabled drivers build config 00:02:10.730 net/bnx2x: not in enabled drivers build config 00:02:10.730 net/bnxt: not in enabled drivers build config 00:02:10.730 net/bonding: not in enabled drivers build config 00:02:10.730 net/cnxk: not in enabled drivers build config 00:02:10.730 net/cpfl: not in enabled drivers build config 00:02:10.730 net/cxgbe: not in enabled drivers build config 00:02:10.730 net/dpaa: not in enabled drivers build config 00:02:10.730 net/dpaa2: not in enabled drivers build config 00:02:10.730 net/e1000: not in enabled drivers build config 00:02:10.730 net/ena: not in enabled drivers build config 00:02:10.730 net/enetc: not in enabled drivers build config 00:02:10.730 net/enetfec: not in enabled drivers build config 00:02:10.730 net/enic: not in enabled drivers build config 00:02:10.731 net/failsafe: not in enabled drivers build config 00:02:10.731 net/fm10k: not in enabled drivers build config 00:02:10.731 net/gve: not in enabled drivers build config 00:02:10.731 net/hinic: not in enabled drivers build config 00:02:10.731 net/hns3: not in enabled drivers build config 00:02:10.731 net/i40e: not in enabled drivers build config 00:02:10.731 net/iavf: not in enabled drivers build config 00:02:10.731 net/ice: not in enabled drivers build config 00:02:10.731 net/idpf: not in enabled drivers build config 00:02:10.731 net/igc: not in enabled drivers build config 00:02:10.731 net/ionic: not in enabled drivers build config 00:02:10.731 net/ipn3ke: not in enabled drivers build config 00:02:10.731 net/ixgbe: not in enabled drivers build config 00:02:10.731 net/mana: not in enabled drivers build config 00:02:10.731 net/memif: not in enabled drivers build config 00:02:10.731 net/mlx4: not in enabled drivers build config 00:02:10.731 net/mlx5: not in enabled drivers build config 00:02:10.731 net/mvneta: not in 
enabled drivers build config 00:02:10.731 net/mvpp2: not in enabled drivers build config 00:02:10.731 net/netvsc: not in enabled drivers build config 00:02:10.731 net/nfb: not in enabled drivers build config 00:02:10.731 net/nfp: not in enabled drivers build config 00:02:10.731 net/ngbe: not in enabled drivers build config 00:02:10.731 net/null: not in enabled drivers build config 00:02:10.731 net/octeontx: not in enabled drivers build config 00:02:10.731 net/octeon_ep: not in enabled drivers build config 00:02:10.731 net/pcap: not in enabled drivers build config 00:02:10.731 net/pfe: not in enabled drivers build config 00:02:10.731 net/qede: not in enabled drivers build config 00:02:10.731 net/ring: not in enabled drivers build config 00:02:10.731 net/sfc: not in enabled drivers build config 00:02:10.731 net/softnic: not in enabled drivers build config 00:02:10.731 net/tap: not in enabled drivers build config 00:02:10.731 net/thunderx: not in enabled drivers build config 00:02:10.731 net/txgbe: not in enabled drivers build config 00:02:10.731 net/vdev_netvsc: not in enabled drivers build config 00:02:10.731 net/vhost: not in enabled drivers build config 00:02:10.731 net/virtio: not in enabled drivers build config 00:02:10.731 net/vmxnet3: not in enabled drivers build config 00:02:10.731 raw/*: missing internal dependency, "rawdev" 00:02:10.731 crypto/armv8: not in enabled drivers build config 00:02:10.731 crypto/bcmfs: not in enabled drivers build config 00:02:10.731 crypto/caam_jr: not in enabled drivers build config 00:02:10.731 crypto/ccp: not in enabled drivers build config 00:02:10.731 crypto/cnxk: not in enabled drivers build config 00:02:10.731 crypto/dpaa_sec: not in enabled drivers build config 00:02:10.731 crypto/dpaa2_sec: not in enabled drivers build config 00:02:10.731 crypto/ipsec_mb: not in enabled drivers build config 00:02:10.731 crypto/mlx5: not in enabled drivers build config 00:02:10.731 crypto/mvsam: not in enabled drivers build config 00:02:10.731 crypto/nitrox: not in enabled drivers build config 00:02:10.731 crypto/null: not in enabled drivers build config 00:02:10.731 crypto/octeontx: not in enabled drivers build config 00:02:10.731 crypto/openssl: not in enabled drivers build config 00:02:10.731 crypto/scheduler: not in enabled drivers build config 00:02:10.731 crypto/uadk: not in enabled drivers build config 00:02:10.731 crypto/virtio: not in enabled drivers build config 00:02:10.731 compress/isal: not in enabled drivers build config 00:02:10.731 compress/mlx5: not in enabled drivers build config 00:02:10.731 compress/octeontx: not in enabled drivers build config 00:02:10.731 compress/zlib: not in enabled drivers build config 00:02:10.731 regex/*: missing internal dependency, "regexdev" 00:02:10.731 ml/*: missing internal dependency, "mldev" 00:02:10.731 vdpa/ifc: not in enabled drivers build config 00:02:10.731 vdpa/mlx5: not in enabled drivers build config 00:02:10.731 vdpa/nfp: not in enabled drivers build config 00:02:10.731 vdpa/sfc: not in enabled drivers build config 00:02:10.731 event/*: missing internal dependency, "eventdev" 00:02:10.731 baseband/*: missing internal dependency, "bbdev" 00:02:10.731 gpu/*: missing internal dependency, "gpudev" 00:02:10.731 00:02:10.731 00:02:10.731 Build targets in project: 85 00:02:10.731 00:02:10.731 DPDK 23.11.0 00:02:10.731 00:02:10.731 User defined options 00:02:10.731 buildtype : debug 00:02:10.731 default_library : shared 00:02:10.731 libdir : lib 00:02:10.731 prefix : 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:10.731 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:10.731 c_link_args : 00:02:10.731 cpu_instruction_set: native 00:02:10.731 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:10.731 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:02:10.731 enable_docs : false 00:02:10.731 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:10.731 enable_kmods : false 00:02:10.731 tests : false 00:02:10.731 00:02:10.731 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:10.731 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:10.731 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:10.731 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.731 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:10.731 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.731 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.731 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:10.731 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:10.731 [8/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:10.731 [9/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:10.731 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.731 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:10.731 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:10.731 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.731 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.731 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.731 [16/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:10.731 [17/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.731 [18/265] Linking static target lib/librte_kvargs.a 00:02:10.731 [19/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:10.731 [20/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:10.731 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:10.731 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.731 [23/265] Linking static target lib/librte_log.a 00:02:10.731 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:10.731 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:10.731 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:10.731 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:10.731 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 
00:02:10.731 [29/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.732 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:10.732 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:10.732 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:10.732 [33/265] Linking static target lib/librte_pci.a 00:02:10.732 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:10.732 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.732 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:10.732 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.995 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:10.995 [39/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:10.995 [40/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:10.995 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:10.995 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:10.995 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.995 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:10.995 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.995 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.995 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.995 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:10.995 [49/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:10.995 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.995 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.995 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.995 [53/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:10.995 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.995 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.995 [56/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.256 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.256 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.256 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.256 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.256 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.256 [62/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:11.256 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.256 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:11.256 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.256 [66/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.256 [67/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:11.256 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.256 [69/265] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:11.256 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.256 [71/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:11.256 [72/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:11.257 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.257 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.257 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.257 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.257 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:11.257 [78/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:11.257 [79/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.257 [80/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:11.257 [81/265] Linking static target lib/librte_meter.a 00:02:11.257 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:11.257 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.257 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.257 [85/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:11.257 [86/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.257 [87/265] Linking static target lib/librte_ring.a 00:02:11.257 [88/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:11.257 [89/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:11.257 [90/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.257 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.257 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:11.257 [93/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.257 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:11.257 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:11.257 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.257 [97/265] Linking static target lib/librte_telemetry.a 00:02:11.257 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:11.257 [99/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.257 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.257 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:11.257 [102/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.257 [103/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:11.257 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.257 [105/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.257 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:11.257 [107/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:11.257 [108/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:11.257 [109/265] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.257 [110/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:11.257 [111/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:11.257 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.257 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.257 [114/265] Linking static target lib/librte_cmdline.a 00:02:11.257 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.257 [116/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:11.257 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.257 [118/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.257 [119/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:11.257 [120/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:11.257 [121/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:11.257 [122/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.257 [123/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:11.257 [124/265] Linking static target lib/librte_rcu.a 00:02:11.257 [125/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.257 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.257 [127/265] Linking static target lib/librte_mempool.a 00:02:11.257 [128/265] Linking static target lib/librte_timer.a 00:02:11.257 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.257 [130/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.257 [131/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:11.257 [132/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:11.257 [133/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.257 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.257 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:11.257 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.257 [137/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:11.257 [138/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:11.257 [139/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.257 [140/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.257 [141/265] Linking static target lib/librte_dmadev.a 00:02:11.257 [142/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.257 [143/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:11.257 [144/265] Linking static target lib/librte_eal.a 00:02:11.257 [145/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.257 [146/265] Linking static target lib/librte_net.a 00:02:11.257 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.257 [148/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:11.518 [149/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:11.518 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 
00:02:11.518 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.518 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.518 [153/265] Linking static target lib/librte_reorder.a 00:02:11.518 [154/265] Linking static target lib/librte_compressdev.a 00:02:11.518 [155/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:11.518 [156/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:11.518 [157/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:11.518 [158/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:11.518 [159/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.518 [160/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.518 [161/265] Linking static target lib/librte_security.a 00:02:11.518 [162/265] Linking static target lib/librte_power.a 00:02:11.518 [163/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:11.518 [164/265] Linking target lib/librte_log.so.24.0 00:02:11.518 [165/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:11.518 [166/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:11.518 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:11.518 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:11.518 [169/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.518 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.518 [171/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.518 [172/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:11.518 [173/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:11.518 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:11.518 [175/265] Linking static target lib/librte_mbuf.a 00:02:11.518 [176/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.518 [177/265] Linking static target lib/librte_hash.a 00:02:11.518 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:11.518 [179/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:11.518 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.518 [181/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:11.518 [182/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:11.518 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.518 [184/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:11.518 [185/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.778 [186/265] Linking target lib/librte_kvargs.so.24.0 00:02:11.778 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.778 [188/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.778 [189/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.778 [190/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:11.778 [191/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:11.778 [192/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.778 [193/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.778 [194/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.778 [195/265] Linking static target drivers/librte_bus_vdev.a 00:02:11.778 [196/265] Linking static target lib/librte_cryptodev.a 00:02:11.778 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:11.778 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:11.778 [199/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:11.778 [200/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.778 [201/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:11.778 [202/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.778 [203/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:11.778 [204/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:11.778 [205/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:11.778 [206/265] Linking static target drivers/librte_mempool_ring.a 00:02:11.778 [207/265] Linking target lib/librte_telemetry.so.24.0 00:02:11.778 [208/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.778 [209/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.778 [210/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.778 [211/265] Linking static target drivers/librte_bus_pci.a 00:02:12.036 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:12.036 [213/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:12.036 [214/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.037 [215/265] Linking static target lib/librte_ethdev.a 00:02:12.037 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.037 [217/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.295 [218/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.295 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:12.295 [220/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.555 [221/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.555 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.555 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.814 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.383 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:13.383 [226/265] Linking static target lib/librte_vhost.a 00:02:13.952 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:15.326 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.896 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.803 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.803 [231/265] Linking target lib/librte_eal.so.24.0 00:02:23.803 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:24.063 [233/265] Linking target lib/librte_pci.so.24.0 00:02:24.063 [234/265] Linking target lib/librte_timer.so.24.0 00:02:24.063 [235/265] Linking target lib/librte_ring.so.24.0 00:02:24.063 [236/265] Linking target lib/librte_meter.so.24.0 00:02:24.063 [237/265] Linking target lib/librte_dmadev.so.24.0 00:02:24.063 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:24.063 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:24.063 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:24.063 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:24.063 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:24.063 [243/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:24.063 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:24.063 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:24.063 [246/265] Linking target lib/librte_rcu.so.24.0 00:02:24.321 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:24.321 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:24.321 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:24.321 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:24.581 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:24.581 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:24.581 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:24.581 [254/265] Linking target lib/librte_net.so.24.0 00:02:24.581 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:24.581 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:24.581 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:24.839 [258/265] Linking target lib/librte_hash.so.24.0 00:02:24.839 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:24.839 [260/265] Linking target lib/librte_security.so.24.0 00:02:24.839 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:24.839 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:24.839 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:24.839 [264/265] Linking target lib/librte_power.so.24.0 00:02:24.839 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:25.097 INFO: autodetecting backend as ninja 00:02:25.097 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:26.036 CC lib/log/log.o 00:02:26.036 CC lib/log/log_deprecated.o 00:02:26.036 CC lib/ut_mock/mock.o 00:02:26.036 CC lib/log/log_flags.o 00:02:26.036 CC lib/ut/ut.o 00:02:26.036 LIB libspdk_log.a 00:02:26.036 LIB libspdk_ut_mock.a 00:02:26.036 LIB libspdk_ut.a 
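The two INFO lines earlier in this block show the backend command the build generated for the bundled DPDK: ninja driving spdk/dpdk/build-tmp with -j 112, after which the SPDK C objects (CC lib/...) start compiling. For reference, a rough hand-run equivalent of that DPDK step is sketched below; the meson options SPDK's configure actually passed are not visible in this log, so none are assumed here.

    # Minimal sketch of the meson/ninja step shown in the log for the bundled DPDK.
    # The build directory name matches the log; configure normally drives this.
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
    meson setup build-tmp              # only needed if build-tmp is not configured yet
    ninja -C build-tmp -j "$(nproc)"   # this run used an explicit -j 112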
00:02:26.036 SO libspdk_log.so.6.1 00:02:26.036 SO libspdk_ut_mock.so.5.0 00:02:26.036 SO libspdk_ut.so.1.0 00:02:26.036 SYMLINK libspdk_ut_mock.so 00:02:26.036 SYMLINK libspdk_log.so 00:02:26.036 SYMLINK libspdk_ut.so 00:02:26.295 CC lib/ioat/ioat.o 00:02:26.295 CC lib/dma/dma.o 00:02:26.295 CC lib/util/cpuset.o 00:02:26.295 CC lib/util/base64.o 00:02:26.295 CC lib/util/bit_array.o 00:02:26.295 CC lib/util/crc32.o 00:02:26.295 CC lib/util/crc16.o 00:02:26.295 CXX lib/trace_parser/trace.o 00:02:26.295 CC lib/util/crc32c.o 00:02:26.295 CC lib/util/dif.o 00:02:26.295 CC lib/util/crc32_ieee.o 00:02:26.295 CC lib/util/crc64.o 00:02:26.295 CC lib/util/fd.o 00:02:26.295 CC lib/util/file.o 00:02:26.295 CC lib/util/hexlify.o 00:02:26.295 CC lib/util/iov.o 00:02:26.295 CC lib/util/math.o 00:02:26.295 CC lib/util/pipe.o 00:02:26.295 CC lib/util/strerror_tls.o 00:02:26.295 CC lib/util/string.o 00:02:26.295 CC lib/util/uuid.o 00:02:26.295 CC lib/util/fd_group.o 00:02:26.295 CC lib/util/xor.o 00:02:26.295 CC lib/util/zipf.o 00:02:26.554 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.554 CC lib/vfio_user/host/vfio_user.o 00:02:26.554 LIB libspdk_dma.a 00:02:26.554 SO libspdk_dma.so.3.0 00:02:26.554 LIB libspdk_ioat.a 00:02:26.554 SYMLINK libspdk_dma.so 00:02:26.554 SO libspdk_ioat.so.6.0 00:02:26.813 SYMLINK libspdk_ioat.so 00:02:26.813 LIB libspdk_vfio_user.a 00:02:26.813 SO libspdk_vfio_user.so.4.0 00:02:26.813 LIB libspdk_util.a 00:02:26.813 SYMLINK libspdk_vfio_user.so 00:02:26.813 SO libspdk_util.so.8.0 00:02:27.074 SYMLINK libspdk_util.so 00:02:27.074 LIB libspdk_trace_parser.a 00:02:27.074 SO libspdk_trace_parser.so.4.0 00:02:27.074 CC lib/conf/conf.o 00:02:27.074 SYMLINK libspdk_trace_parser.so 00:02:27.074 CC lib/vmd/vmd.o 00:02:27.074 CC lib/vmd/led.o 00:02:27.074 CC lib/env_dpdk/env.o 00:02:27.074 CC lib/env_dpdk/pci.o 00:02:27.074 CC lib/env_dpdk/memory.o 00:02:27.074 CC lib/env_dpdk/init.o 00:02:27.074 CC lib/env_dpdk/pci_idxd.o 00:02:27.074 CC lib/env_dpdk/pci_virtio.o 00:02:27.074 CC lib/env_dpdk/threads.o 00:02:27.074 CC lib/env_dpdk/pci_ioat.o 00:02:27.074 CC lib/env_dpdk/pci_vmd.o 00:02:27.074 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:27.074 CC lib/env_dpdk/pci_event.o 00:02:27.074 CC lib/env_dpdk/sigbus_handler.o 00:02:27.074 CC lib/env_dpdk/pci_dpdk.o 00:02:27.333 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:27.333 CC lib/rdma/common.o 00:02:27.333 CC lib/rdma/rdma_verbs.o 00:02:27.333 CC lib/json/json_parse.o 00:02:27.333 CC lib/idxd/idxd_user.o 00:02:27.333 CC lib/json/json_util.o 00:02:27.333 CC lib/idxd/idxd.o 00:02:27.333 CC lib/json/json_write.o 00:02:27.333 CC lib/idxd/idxd_kernel.o 00:02:27.333 LIB libspdk_conf.a 00:02:27.333 SO libspdk_conf.so.5.0 00:02:27.333 SYMLINK libspdk_conf.so 00:02:27.592 LIB libspdk_json.a 00:02:27.592 LIB libspdk_rdma.a 00:02:27.592 SO libspdk_json.so.5.1 00:02:27.592 SO libspdk_rdma.so.5.0 00:02:27.592 SYMLINK libspdk_json.so 00:02:27.592 SYMLINK libspdk_rdma.so 00:02:27.592 LIB libspdk_idxd.a 00:02:27.592 SO libspdk_idxd.so.11.0 00:02:27.592 LIB libspdk_vmd.a 00:02:27.592 SO libspdk_vmd.so.5.0 00:02:27.851 SYMLINK libspdk_idxd.so 00:02:27.851 SYMLINK libspdk_vmd.so 00:02:27.851 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:27.851 CC lib/jsonrpc/jsonrpc_server.o 00:02:27.851 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:27.851 CC lib/jsonrpc/jsonrpc_client.o 00:02:28.110 LIB libspdk_jsonrpc.a 00:02:28.110 SO libspdk_jsonrpc.so.5.1 00:02:28.110 SYMLINK libspdk_jsonrpc.so 00:02:28.110 LIB libspdk_env_dpdk.a 00:02:28.369 SO libspdk_env_dpdk.so.13.0 00:02:28.369 CC 
lib/rpc/rpc.o 00:02:28.369 SYMLINK libspdk_env_dpdk.so 00:02:28.628 LIB libspdk_rpc.a 00:02:28.628 SO libspdk_rpc.so.5.0 00:02:28.628 SYMLINK libspdk_rpc.so 00:02:28.887 CC lib/sock/sock.o 00:02:28.887 CC lib/sock/sock_rpc.o 00:02:28.887 CC lib/notify/notify.o 00:02:28.887 CC lib/notify/notify_rpc.o 00:02:28.887 CC lib/trace/trace_rpc.o 00:02:28.887 CC lib/trace/trace.o 00:02:28.887 CC lib/trace/trace_flags.o 00:02:28.887 LIB libspdk_notify.a 00:02:29.147 SO libspdk_notify.so.5.0 00:02:29.147 LIB libspdk_trace.a 00:02:29.147 SO libspdk_trace.so.9.0 00:02:29.147 SYMLINK libspdk_notify.so 00:02:29.147 LIB libspdk_sock.a 00:02:29.147 SYMLINK libspdk_trace.so 00:02:29.147 SO libspdk_sock.so.8.0 00:02:29.147 SYMLINK libspdk_sock.so 00:02:29.406 CC lib/thread/iobuf.o 00:02:29.406 CC lib/thread/thread.o 00:02:29.406 CC lib/nvme/nvme_fabric.o 00:02:29.406 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:29.406 CC lib/nvme/nvme_ctrlr.o 00:02:29.406 CC lib/nvme/nvme_ns_cmd.o 00:02:29.406 CC lib/nvme/nvme_ns.o 00:02:29.406 CC lib/nvme/nvme_pcie_common.o 00:02:29.406 CC lib/nvme/nvme_pcie.o 00:02:29.406 CC lib/nvme/nvme_qpair.o 00:02:29.406 CC lib/nvme/nvme.o 00:02:29.406 CC lib/nvme/nvme_quirks.o 00:02:29.406 CC lib/nvme/nvme_transport.o 00:02:29.406 CC lib/nvme/nvme_discovery.o 00:02:29.406 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.406 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.406 CC lib/nvme/nvme_tcp.o 00:02:29.406 CC lib/nvme/nvme_opal.o 00:02:29.406 CC lib/nvme/nvme_io_msg.o 00:02:29.406 CC lib/nvme/nvme_poll_group.o 00:02:29.406 CC lib/nvme/nvme_zns.o 00:02:29.406 CC lib/nvme/nvme_cuse.o 00:02:29.406 CC lib/nvme/nvme_vfio_user.o 00:02:29.406 CC lib/nvme/nvme_rdma.o 00:02:30.342 LIB libspdk_thread.a 00:02:30.601 SO libspdk_thread.so.9.0 00:02:30.601 SYMLINK libspdk_thread.so 00:02:30.858 CC lib/init/json_config.o 00:02:30.858 CC lib/init/subsystem.o 00:02:30.858 CC lib/init/subsystem_rpc.o 00:02:30.858 CC lib/init/rpc.o 00:02:30.858 CC lib/virtio/virtio.o 00:02:30.858 CC lib/virtio/virtio_vhost_user.o 00:02:30.858 CC lib/virtio/virtio_vfio_user.o 00:02:30.858 CC lib/virtio/virtio_pci.o 00:02:30.858 CC lib/blob/zeroes.o 00:02:30.858 CC lib/accel/accel.o 00:02:30.858 CC lib/blob/blobstore.o 00:02:30.858 CC lib/blob/request.o 00:02:30.858 CC lib/accel/accel_rpc.o 00:02:30.858 CC lib/accel/accel_sw.o 00:02:30.858 CC lib/blob/blob_bs_dev.o 00:02:30.858 LIB libspdk_init.a 00:02:30.858 LIB libspdk_nvme.a 00:02:31.116 SO libspdk_init.so.4.0 00:02:31.116 SYMLINK libspdk_init.so 00:02:31.116 LIB libspdk_virtio.a 00:02:31.116 SO libspdk_virtio.so.6.0 00:02:31.116 SO libspdk_nvme.so.12.0 00:02:31.116 SYMLINK libspdk_virtio.so 00:02:31.374 CC lib/event/log_rpc.o 00:02:31.374 CC lib/event/app.o 00:02:31.374 CC lib/event/reactor.o 00:02:31.374 CC lib/event/scheduler_static.o 00:02:31.374 CC lib/event/app_rpc.o 00:02:31.374 SYMLINK libspdk_nvme.so 00:02:31.631 LIB libspdk_accel.a 00:02:31.631 SO libspdk_accel.so.14.0 00:02:31.631 LIB libspdk_event.a 00:02:31.631 SYMLINK libspdk_accel.so 00:02:31.631 SO libspdk_event.so.12.0 00:02:31.631 SYMLINK libspdk_event.so 00:02:31.891 CC lib/bdev/bdev.o 00:02:31.891 CC lib/bdev/bdev_rpc.o 00:02:31.891 CC lib/bdev/bdev_zone.o 00:02:31.891 CC lib/bdev/part.o 00:02:31.891 CC lib/bdev/scsi_nvme.o 00:02:32.830 LIB libspdk_blob.a 00:02:32.830 SO libspdk_blob.so.10.1 00:02:32.830 SYMLINK libspdk_blob.so 00:02:33.089 CC lib/blobfs/blobfs.o 00:02:33.089 CC lib/blobfs/tree.o 00:02:33.089 CC lib/lvol/lvol.o 00:02:33.657 LIB libspdk_bdev.a 00:02:33.657 LIB libspdk_blobfs.a 00:02:33.657 SO 
libspdk_bdev.so.14.0 00:02:33.657 SO libspdk_blobfs.so.9.0 00:02:33.657 LIB libspdk_lvol.a 00:02:33.657 SO libspdk_lvol.so.9.1 00:02:33.657 SYMLINK libspdk_bdev.so 00:02:33.657 SYMLINK libspdk_blobfs.so 00:02:33.914 SYMLINK libspdk_lvol.so 00:02:33.914 CC lib/ublk/ublk.o 00:02:33.914 CC lib/ublk/ublk_rpc.o 00:02:33.914 CC lib/scsi/dev.o 00:02:33.914 CC lib/scsi/lun.o 00:02:33.914 CC lib/scsi/scsi_bdev.o 00:02:33.914 CC lib/scsi/port.o 00:02:33.914 CC lib/scsi/scsi.o 00:02:33.914 CC lib/scsi/scsi_pr.o 00:02:33.914 CC lib/scsi/scsi_rpc.o 00:02:33.914 CC lib/scsi/task.o 00:02:33.914 CC lib/nbd/nbd.o 00:02:33.914 CC lib/nbd/nbd_rpc.o 00:02:33.914 CC lib/nvmf/ctrlr_bdev.o 00:02:33.914 CC lib/nvmf/ctrlr.o 00:02:33.914 CC lib/nvmf/ctrlr_discovery.o 00:02:33.914 CC lib/ftl/ftl_core.o 00:02:33.914 CC lib/ftl/ftl_init.o 00:02:33.914 CC lib/nvmf/subsystem.o 00:02:33.914 CC lib/ftl/ftl_layout.o 00:02:33.914 CC lib/nvmf/nvmf.o 00:02:33.914 CC lib/ftl/ftl_debug.o 00:02:33.914 CC lib/nvmf/nvmf_rpc.o 00:02:33.914 CC lib/ftl/ftl_l2p.o 00:02:33.914 CC lib/ftl/ftl_io.o 00:02:33.914 CC lib/nvmf/transport.o 00:02:33.914 CC lib/ftl/ftl_sb.o 00:02:33.914 CC lib/ftl/ftl_nv_cache.o 00:02:33.914 CC lib/ftl/ftl_l2p_flat.o 00:02:33.914 CC lib/nvmf/rdma.o 00:02:33.914 CC lib/nvmf/tcp.o 00:02:33.914 CC lib/ftl/ftl_band.o 00:02:33.914 CC lib/ftl/ftl_band_ops.o 00:02:33.914 CC lib/ftl/ftl_writer.o 00:02:33.914 CC lib/ftl/ftl_rq.o 00:02:33.914 CC lib/ftl/ftl_reloc.o 00:02:33.914 CC lib/ftl/ftl_l2p_cache.o 00:02:33.914 CC lib/ftl/ftl_p2l.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.914 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.914 CC lib/ftl/utils/ftl_conf.o 00:02:33.914 CC lib/ftl/utils/ftl_md.o 00:02:33.914 CC lib/ftl/utils/ftl_mempool.o 00:02:33.914 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.914 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.914 CC lib/ftl/utils/ftl_property.o 00:02:33.914 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:33.914 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.914 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:33.914 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:33.914 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:33.915 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:33.915 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:33.915 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:33.915 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:33.915 CC lib/ftl/base/ftl_base_dev.o 00:02:33.915 CC lib/ftl/ftl_trace.o 00:02:33.915 CC lib/ftl/base/ftl_base_bdev.o 00:02:34.481 LIB libspdk_scsi.a 00:02:34.481 LIB libspdk_nbd.a 00:02:34.481 SO libspdk_scsi.so.8.0 00:02:34.481 SO libspdk_nbd.so.6.0 00:02:34.746 SYMLINK libspdk_nbd.so 00:02:34.746 SYMLINK libspdk_scsi.so 00:02:34.746 LIB libspdk_ublk.a 00:02:34.746 SO libspdk_ublk.so.2.0 00:02:34.746 SYMLINK libspdk_ublk.so 00:02:34.746 LIB libspdk_ftl.a 00:02:34.746 CC lib/vhost/vhost.o 00:02:34.746 CC lib/vhost/vhost_rpc.o 00:02:34.746 CC lib/vhost/vhost_blk.o 00:02:34.746 CC lib/vhost/vhost_scsi.o 00:02:34.746 CC lib/vhost/rte_vhost_user.o 00:02:34.746 CC lib/iscsi/iscsi.o 00:02:34.746 CC lib/iscsi/conn.o 
00:02:34.746 CC lib/iscsi/init_grp.o 00:02:34.746 CC lib/iscsi/md5.o 00:02:34.746 CC lib/iscsi/param.o 00:02:34.746 CC lib/iscsi/portal_grp.o 00:02:34.746 CC lib/iscsi/iscsi_rpc.o 00:02:34.746 CC lib/iscsi/tgt_node.o 00:02:34.746 CC lib/iscsi/iscsi_subsystem.o 00:02:34.746 CC lib/iscsi/task.o 00:02:35.004 SO libspdk_ftl.so.8.0 00:02:35.263 SYMLINK libspdk_ftl.so 00:02:35.522 LIB libspdk_nvmf.a 00:02:35.522 LIB libspdk_vhost.a 00:02:35.522 SO libspdk_nvmf.so.17.0 00:02:35.522 SO libspdk_vhost.so.7.1 00:02:35.781 SYMLINK libspdk_vhost.so 00:02:35.781 SYMLINK libspdk_nvmf.so 00:02:35.781 LIB libspdk_iscsi.a 00:02:35.781 SO libspdk_iscsi.so.7.0 00:02:36.042 SYMLINK libspdk_iscsi.so 00:02:36.301 CC module/env_dpdk/env_dpdk_rpc.o 00:02:36.559 CC module/sock/posix/posix.o 00:02:36.559 CC module/accel/dsa/accel_dsa.o 00:02:36.559 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.559 LIB libspdk_env_dpdk_rpc.a 00:02:36.559 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.559 CC module/accel/ioat/accel_ioat.o 00:02:36.559 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.559 CC module/blob/bdev/blob_bdev.o 00:02:36.559 CC module/accel/error/accel_error.o 00:02:36.559 CC module/accel/error/accel_error_rpc.o 00:02:36.559 CC module/scheduler/gscheduler/gscheduler.o 00:02:36.559 CC module/accel/iaa/accel_iaa.o 00:02:36.559 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.559 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:36.559 SO libspdk_env_dpdk_rpc.so.5.0 00:02:36.559 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.559 LIB libspdk_accel_error.a 00:02:36.559 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.559 LIB libspdk_scheduler_gscheduler.a 00:02:36.817 LIB libspdk_accel_ioat.a 00:02:36.817 SO libspdk_accel_error.so.1.0 00:02:36.817 SO libspdk_scheduler_gscheduler.so.3.0 00:02:36.817 LIB libspdk_accel_dsa.a 00:02:36.817 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:36.817 LIB libspdk_scheduler_dynamic.a 00:02:36.817 LIB libspdk_accel_iaa.a 00:02:36.817 SO libspdk_accel_ioat.so.5.0 00:02:36.817 SO libspdk_accel_dsa.so.4.0 00:02:36.817 LIB libspdk_blob_bdev.a 00:02:36.817 SO libspdk_accel_iaa.so.2.0 00:02:36.817 SO libspdk_scheduler_dynamic.so.3.0 00:02:36.818 SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.818 SYMLINK libspdk_accel_error.so 00:02:36.818 SO libspdk_blob_bdev.so.10.1 00:02:36.818 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.818 SYMLINK libspdk_accel_ioat.so 00:02:36.818 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.818 SYMLINK libspdk_accel_dsa.so 00:02:36.818 SYMLINK libspdk_accel_iaa.so 00:02:36.818 SYMLINK libspdk_blob_bdev.so 00:02:37.076 LIB libspdk_sock_posix.a 00:02:37.076 SO libspdk_sock_posix.so.5.0 00:02:37.076 SYMLINK libspdk_sock_posix.so 00:02:37.076 CC module/bdev/lvol/vbdev_lvol.o 00:02:37.076 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:37.076 CC module/bdev/gpt/vbdev_gpt.o 00:02:37.076 CC module/blobfs/bdev/blobfs_bdev.o 00:02:37.076 CC module/bdev/gpt/gpt.o 00:02:37.076 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:37.076 CC module/bdev/raid/bdev_raid.o 00:02:37.076 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.076 CC module/bdev/raid/raid0.o 00:02:37.076 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.076 CC module/bdev/raid/raid1.o 00:02:37.076 CC module/bdev/raid/concat.o 00:02:37.076 CC module/bdev/delay/vbdev_delay.o 00:02:37.076 CC module/bdev/malloc/bdev_malloc.o 00:02:37.076 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:37.076 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:37.076 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.076 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:37.076 CC module/bdev/split/vbdev_split.o 00:02:37.076 CC module/bdev/ftl/bdev_ftl.o 00:02:37.076 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:37.076 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:37.076 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:37.076 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:37.076 CC module/bdev/iscsi/bdev_iscsi.o 00:02:37.076 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:37.076 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.076 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.076 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.076 CC module/bdev/aio/bdev_aio_rpc.o 00:02:37.076 CC module/bdev/aio/bdev_aio.o 00:02:37.076 CC module/bdev/null/bdev_null_rpc.o 00:02:37.076 CC module/bdev/error/vbdev_error.o 00:02:37.076 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.076 CC module/bdev/null/bdev_null.o 00:02:37.076 CC module/bdev/nvme/bdev_nvme.o 00:02:37.076 CC module/bdev/error/vbdev_error_rpc.o 00:02:37.076 CC module/bdev/nvme/nvme_rpc.o 00:02:37.076 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.076 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:37.076 CC module/bdev/nvme/vbdev_opal.o 00:02:37.076 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:37.333 LIB libspdk_blobfs_bdev.a 00:02:37.333 SO libspdk_blobfs_bdev.so.5.0 00:02:37.333 LIB libspdk_bdev_split.a 00:02:37.333 LIB libspdk_bdev_gpt.a 00:02:37.333 SO libspdk_bdev_split.so.5.0 00:02:37.594 LIB libspdk_bdev_null.a 00:02:37.594 SYMLINK libspdk_blobfs_bdev.so 00:02:37.594 LIB libspdk_bdev_error.a 00:02:37.594 SO libspdk_bdev_gpt.so.5.0 00:02:37.594 LIB libspdk_bdev_ftl.a 00:02:37.595 SYMLINK libspdk_bdev_split.so 00:02:37.595 LIB libspdk_bdev_passthru.a 00:02:37.595 SO libspdk_bdev_null.so.5.0 00:02:37.595 LIB libspdk_bdev_malloc.a 00:02:37.595 LIB libspdk_bdev_aio.a 00:02:37.595 LIB libspdk_bdev_delay.a 00:02:37.595 SO libspdk_bdev_error.so.5.0 00:02:37.595 LIB libspdk_bdev_zone_block.a 00:02:37.595 LIB libspdk_bdev_iscsi.a 00:02:37.595 SYMLINK libspdk_bdev_gpt.so 00:02:37.595 SO libspdk_bdev_malloc.so.5.0 00:02:37.595 SO libspdk_bdev_passthru.so.5.0 00:02:37.595 SO libspdk_bdev_ftl.so.5.0 00:02:37.595 SO libspdk_bdev_aio.so.5.0 00:02:37.595 SO libspdk_bdev_delay.so.5.0 00:02:37.595 SYMLINK libspdk_bdev_null.so 00:02:37.595 SO libspdk_bdev_zone_block.so.5.0 00:02:37.595 SYMLINK libspdk_bdev_error.so 00:02:37.595 SO libspdk_bdev_iscsi.so.5.0 00:02:37.595 LIB libspdk_bdev_lvol.a 00:02:37.595 SYMLINK libspdk_bdev_malloc.so 00:02:37.595 SYMLINK libspdk_bdev_passthru.so 00:02:37.595 SYMLINK libspdk_bdev_ftl.so 00:02:37.595 SYMLINK libspdk_bdev_aio.so 00:02:37.595 SYMLINK libspdk_bdev_delay.so 00:02:37.595 SO libspdk_bdev_lvol.so.5.0 00:02:37.595 SYMLINK libspdk_bdev_zone_block.so 00:02:37.595 SYMLINK libspdk_bdev_iscsi.so 00:02:37.595 LIB libspdk_bdev_virtio.a 00:02:37.595 SYMLINK libspdk_bdev_lvol.so 00:02:37.595 SO libspdk_bdev_virtio.so.5.0 00:02:37.985 SYMLINK libspdk_bdev_virtio.so 00:02:37.985 LIB libspdk_bdev_raid.a 00:02:37.985 SO libspdk_bdev_raid.so.5.0 00:02:37.985 SYMLINK libspdk_bdev_raid.so 00:02:38.966 LIB libspdk_bdev_nvme.a 00:02:38.966 SO libspdk_bdev_nvme.so.6.0 00:02:38.966 SYMLINK libspdk_bdev_nvme.so 00:02:39.223 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.482 CC module/event/subsystems/vmd/vmd.o 00:02:39.482 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.482 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.482 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.482 CC module/event/subsystems/scheduler/scheduler.o 
00:02:39.482 CC module/event/subsystems/sock/sock.o 00:02:39.482 LIB libspdk_event_vhost_blk.a 00:02:39.482 LIB libspdk_event_scheduler.a 00:02:39.482 LIB libspdk_event_vmd.a 00:02:39.482 SO libspdk_event_vhost_blk.so.2.0 00:02:39.482 LIB libspdk_event_sock.a 00:02:39.482 LIB libspdk_event_iobuf.a 00:02:39.482 SO libspdk_event_scheduler.so.3.0 00:02:39.482 SO libspdk_event_sock.so.4.0 00:02:39.482 SO libspdk_event_vmd.so.5.0 00:02:39.482 SYMLINK libspdk_event_vhost_blk.so 00:02:39.482 SO libspdk_event_iobuf.so.2.0 00:02:39.482 SYMLINK libspdk_event_scheduler.so 00:02:39.482 SYMLINK libspdk_event_sock.so 00:02:39.740 SYMLINK libspdk_event_vmd.so 00:02:39.740 SYMLINK libspdk_event_iobuf.so 00:02:39.740 CC module/event/subsystems/accel/accel.o 00:02:39.998 LIB libspdk_event_accel.a 00:02:39.998 SO libspdk_event_accel.so.5.0 00:02:39.998 SYMLINK libspdk_event_accel.so 00:02:40.256 CC module/event/subsystems/bdev/bdev.o 00:02:40.514 LIB libspdk_event_bdev.a 00:02:40.514 SO libspdk_event_bdev.so.5.0 00:02:40.514 SYMLINK libspdk_event_bdev.so 00:02:40.773 CC module/event/subsystems/nbd/nbd.o 00:02:40.773 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.773 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.773 CC module/event/subsystems/scsi/scsi.o 00:02:40.773 CC module/event/subsystems/ublk/ublk.o 00:02:41.032 LIB libspdk_event_nbd.a 00:02:41.032 LIB libspdk_event_scsi.a 00:02:41.032 SO libspdk_event_nbd.so.5.0 00:02:41.032 LIB libspdk_event_ublk.a 00:02:41.032 SO libspdk_event_scsi.so.5.0 00:02:41.032 SO libspdk_event_ublk.so.2.0 00:02:41.032 SYMLINK libspdk_event_nbd.so 00:02:41.032 LIB libspdk_event_nvmf.a 00:02:41.032 SYMLINK libspdk_event_scsi.so 00:02:41.032 SO libspdk_event_nvmf.so.5.0 00:02:41.032 SYMLINK libspdk_event_ublk.so 00:02:41.032 SYMLINK libspdk_event_nvmf.so 00:02:41.291 CC module/event/subsystems/iscsi/iscsi.o 00:02:41.291 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:41.551 LIB libspdk_event_vhost_scsi.a 00:02:41.551 LIB libspdk_event_iscsi.a 00:02:41.551 SO libspdk_event_vhost_scsi.so.2.0 00:02:41.551 SO libspdk_event_iscsi.so.5.0 00:02:41.551 SYMLINK libspdk_event_iscsi.so 00:02:41.551 SYMLINK libspdk_event_vhost_scsi.so 00:02:41.811 SO libspdk.so.5.0 00:02:41.811 SYMLINK libspdk.so 00:02:42.075 CC app/spdk_top/spdk_top.o 00:02:42.075 TEST_HEADER include/spdk/accel.h 00:02:42.075 CC app/spdk_nvme_perf/perf.o 00:02:42.075 TEST_HEADER include/spdk/barrier.h 00:02:42.075 TEST_HEADER include/spdk/accel_module.h 00:02:42.075 TEST_HEADER include/spdk/assert.h 00:02:42.075 CC app/trace_record/trace_record.o 00:02:42.075 CXX app/trace/trace.o 00:02:42.075 CC app/spdk_nvme_discover/discovery_aer.o 00:02:42.075 TEST_HEADER include/spdk/base64.h 00:02:42.075 CC test/rpc_client/rpc_client_test.o 00:02:42.075 TEST_HEADER include/spdk/bdev.h 00:02:42.076 CC app/spdk_lspci/spdk_lspci.o 00:02:42.076 CC app/spdk_nvme_identify/identify.o 00:02:42.076 TEST_HEADER include/spdk/bdev_module.h 00:02:42.076 TEST_HEADER include/spdk/bit_array.h 00:02:42.076 TEST_HEADER include/spdk/bdev_zone.h 00:02:42.076 TEST_HEADER include/spdk/bit_pool.h 00:02:42.076 TEST_HEADER include/spdk/blob_bdev.h 00:02:42.076 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:42.076 TEST_HEADER include/spdk/blobfs.h 00:02:42.076 TEST_HEADER include/spdk/blob.h 00:02:42.076 TEST_HEADER include/spdk/conf.h 00:02:42.076 TEST_HEADER include/spdk/config.h 00:02:42.076 TEST_HEADER include/spdk/cpuset.h 00:02:42.076 TEST_HEADER include/spdk/crc16.h 00:02:42.076 TEST_HEADER include/spdk/crc32.h 00:02:42.076 
TEST_HEADER include/spdk/crc64.h 00:02:42.076 TEST_HEADER include/spdk/dif.h 00:02:42.076 TEST_HEADER include/spdk/dma.h 00:02:42.076 TEST_HEADER include/spdk/endian.h 00:02:42.076 TEST_HEADER include/spdk/env_dpdk.h 00:02:42.076 TEST_HEADER include/spdk/env.h 00:02:42.076 TEST_HEADER include/spdk/event.h 00:02:42.076 TEST_HEADER include/spdk/fd_group.h 00:02:42.076 TEST_HEADER include/spdk/fd.h 00:02:42.076 TEST_HEADER include/spdk/file.h 00:02:42.076 TEST_HEADER include/spdk/ftl.h 00:02:42.076 TEST_HEADER include/spdk/hexlify.h 00:02:42.076 TEST_HEADER include/spdk/gpt_spec.h 00:02:42.076 TEST_HEADER include/spdk/histogram_data.h 00:02:42.076 TEST_HEADER include/spdk/idxd.h 00:02:42.076 TEST_HEADER include/spdk/idxd_spec.h 00:02:42.076 TEST_HEADER include/spdk/init.h 00:02:42.076 TEST_HEADER include/spdk/ioat.h 00:02:42.076 TEST_HEADER include/spdk/ioat_spec.h 00:02:42.076 TEST_HEADER include/spdk/json.h 00:02:42.076 TEST_HEADER include/spdk/jsonrpc.h 00:02:42.076 TEST_HEADER include/spdk/iscsi_spec.h 00:02:42.076 TEST_HEADER include/spdk/likely.h 00:02:42.076 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:42.076 TEST_HEADER include/spdk/log.h 00:02:42.076 TEST_HEADER include/spdk/memory.h 00:02:42.076 TEST_HEADER include/spdk/lvol.h 00:02:42.076 TEST_HEADER include/spdk/nbd.h 00:02:42.076 TEST_HEADER include/spdk/mmio.h 00:02:42.076 TEST_HEADER include/spdk/notify.h 00:02:42.076 TEST_HEADER include/spdk/nvme.h 00:02:42.076 CC app/nvmf_tgt/nvmf_main.o 00:02:42.076 TEST_HEADER include/spdk/nvme_intel.h 00:02:42.076 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:42.076 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:42.076 TEST_HEADER include/spdk/nvme_spec.h 00:02:42.076 TEST_HEADER include/spdk/nvme_zns.h 00:02:42.076 CC app/iscsi_tgt/iscsi_tgt.o 00:02:42.076 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:42.076 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:42.076 TEST_HEADER include/spdk/nvmf.h 00:02:42.076 TEST_HEADER include/spdk/nvmf_spec.h 00:02:42.076 TEST_HEADER include/spdk/nvmf_transport.h 00:02:42.076 TEST_HEADER include/spdk/opal_spec.h 00:02:42.076 TEST_HEADER include/spdk/opal.h 00:02:42.076 TEST_HEADER include/spdk/pci_ids.h 00:02:42.076 TEST_HEADER include/spdk/pipe.h 00:02:42.076 TEST_HEADER include/spdk/queue.h 00:02:42.076 TEST_HEADER include/spdk/reduce.h 00:02:42.076 TEST_HEADER include/spdk/rpc.h 00:02:42.076 TEST_HEADER include/spdk/scheduler.h 00:02:42.076 CC app/spdk_dd/spdk_dd.o 00:02:42.076 TEST_HEADER include/spdk/scsi.h 00:02:42.076 TEST_HEADER include/spdk/scsi_spec.h 00:02:42.076 TEST_HEADER include/spdk/stdinc.h 00:02:42.076 TEST_HEADER include/spdk/sock.h 00:02:42.076 TEST_HEADER include/spdk/trace.h 00:02:42.076 TEST_HEADER include/spdk/string.h 00:02:42.076 TEST_HEADER include/spdk/thread.h 00:02:42.076 TEST_HEADER include/spdk/trace_parser.h 00:02:42.076 TEST_HEADER include/spdk/tree.h 00:02:42.076 TEST_HEADER include/spdk/ublk.h 00:02:42.076 CC app/spdk_tgt/spdk_tgt.o 00:02:42.076 TEST_HEADER include/spdk/uuid.h 00:02:42.076 TEST_HEADER include/spdk/util.h 00:02:42.076 TEST_HEADER include/spdk/version.h 00:02:42.076 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:42.076 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:42.076 CC app/vhost/vhost.o 00:02:42.076 TEST_HEADER include/spdk/vhost.h 00:02:42.076 TEST_HEADER include/spdk/vmd.h 00:02:42.076 TEST_HEADER include/spdk/xor.h 00:02:42.076 TEST_HEADER include/spdk/zipf.h 00:02:42.076 CXX test/cpp_headers/accel.o 00:02:42.076 CXX test/cpp_headers/accel_module.o 00:02:42.076 CXX test/cpp_headers/assert.o 
00:02:42.076 CXX test/cpp_headers/base64.o 00:02:42.076 CXX test/cpp_headers/barrier.o 00:02:42.076 CXX test/cpp_headers/bdev.o 00:02:42.076 CXX test/cpp_headers/bdev_zone.o 00:02:42.076 CXX test/cpp_headers/bdev_module.o 00:02:42.076 CXX test/cpp_headers/bit_array.o 00:02:42.076 CXX test/cpp_headers/bit_pool.o 00:02:42.076 CXX test/cpp_headers/blob_bdev.o 00:02:42.076 CXX test/cpp_headers/blobfs_bdev.o 00:02:42.076 CXX test/cpp_headers/blobfs.o 00:02:42.076 CXX test/cpp_headers/blob.o 00:02:42.076 CXX test/cpp_headers/config.o 00:02:42.076 CXX test/cpp_headers/conf.o 00:02:42.076 CXX test/cpp_headers/cpuset.o 00:02:42.076 CXX test/cpp_headers/crc16.o 00:02:42.076 CXX test/cpp_headers/crc32.o 00:02:42.076 CXX test/cpp_headers/crc64.o 00:02:42.076 CXX test/cpp_headers/dif.o 00:02:42.076 CXX test/cpp_headers/dma.o 00:02:42.076 CXX test/cpp_headers/endian.o 00:02:42.076 CXX test/cpp_headers/env_dpdk.o 00:02:42.076 CXX test/cpp_headers/env.o 00:02:42.076 CXX test/cpp_headers/event.o 00:02:42.076 CXX test/cpp_headers/fd_group.o 00:02:42.076 CXX test/cpp_headers/fd.o 00:02:42.076 CXX test/cpp_headers/file.o 00:02:42.076 CXX test/cpp_headers/ftl.o 00:02:42.076 CXX test/cpp_headers/gpt_spec.o 00:02:42.076 CXX test/cpp_headers/hexlify.o 00:02:42.076 CXX test/cpp_headers/histogram_data.o 00:02:42.076 CXX test/cpp_headers/idxd.o 00:02:42.076 CXX test/cpp_headers/idxd_spec.o 00:02:42.076 CXX test/cpp_headers/init.o 00:02:42.076 CXX test/cpp_headers/ioat.o 00:02:42.076 CC examples/accel/perf/accel_perf.o 00:02:42.076 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:42.076 CC test/thread/poller_perf/poller_perf.o 00:02:42.076 CC examples/nvme/hello_world/hello_world.o 00:02:42.076 CC examples/nvme/arbitration/arbitration.o 00:02:42.076 CC examples/util/zipf/zipf.o 00:02:42.076 CC test/nvme/overhead/overhead.o 00:02:42.076 CC examples/ioat/verify/verify.o 00:02:42.076 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:42.076 CC examples/nvme/hotplug/hotplug.o 00:02:42.076 CC test/nvme/aer/aer.o 00:02:42.076 CC examples/nvme/abort/abort.o 00:02:42.076 CC examples/nvme/reconnect/reconnect.o 00:02:42.076 CC test/nvme/sgl/sgl.o 00:02:42.076 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:42.076 CC test/app/jsoncat/jsoncat.o 00:02:42.076 CC examples/ioat/perf/perf.o 00:02:42.076 CC test/nvme/reset/reset.o 00:02:42.076 CC test/app/stub/stub.o 00:02:42.076 CC test/app/histogram_perf/histogram_perf.o 00:02:42.076 CC test/nvme/reserve/reserve.o 00:02:42.076 CC test/nvme/startup/startup.o 00:02:42.076 CC test/env/pci/pci_ut.o 00:02:42.076 CC test/nvme/simple_copy/simple_copy.o 00:02:42.076 CC examples/blob/hello_world/hello_blob.o 00:02:42.076 CC test/env/memory/memory_ut.o 00:02:42.076 CC test/nvme/e2edp/nvme_dp.o 00:02:42.076 CC examples/sock/hello_world/hello_sock.o 00:02:42.076 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:42.076 CC test/env/vtophys/vtophys.o 00:02:42.076 CC test/nvme/fdp/fdp.o 00:02:42.076 CC test/nvme/connect_stress/connect_stress.o 00:02:42.076 CC test/event/reactor/reactor.o 00:02:42.076 CC test/nvme/compliance/nvme_compliance.o 00:02:42.076 CC test/event/event_perf/event_perf.o 00:02:42.076 CC app/fio/nvme/fio_plugin.o 00:02:42.076 CC examples/vmd/lsvmd/lsvmd.o 00:02:42.076 CC examples/bdev/hello_world/hello_bdev.o 00:02:42.076 CC examples/blob/cli/blobcli.o 00:02:42.076 CC test/nvme/fused_ordering/fused_ordering.o 00:02:42.076 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:42.076 CC examples/idxd/perf/perf.o 00:02:42.076 CC test/event/reactor_perf/reactor_perf.o 
00:02:42.076 CC test/nvme/err_injection/err_injection.o 00:02:42.076 CC test/nvme/cuse/cuse.o 00:02:42.347 CC examples/vmd/led/led.o 00:02:42.347 CC test/bdev/bdevio/bdevio.o 00:02:42.347 CC test/nvme/boot_partition/boot_partition.o 00:02:42.347 CC test/event/app_repeat/app_repeat.o 00:02:42.347 CC test/blobfs/mkfs/mkfs.o 00:02:42.347 CC test/app/bdev_svc/bdev_svc.o 00:02:42.347 CC examples/nvmf/nvmf/nvmf.o 00:02:42.347 CC examples/bdev/bdevperf/bdevperf.o 00:02:42.347 CC test/accel/dif/dif.o 00:02:42.347 CC examples/thread/thread/thread_ex.o 00:02:42.347 CC test/event/scheduler/scheduler.o 00:02:42.347 CC app/fio/bdev/fio_plugin.o 00:02:42.347 CC test/dma/test_dma/test_dma.o 00:02:42.347 LINK spdk_lspci 00:02:42.347 CC test/lvol/esnap/esnap.o 00:02:42.347 CC test/env/mem_callbacks/mem_callbacks.o 00:02:42.347 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:42.347 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:42.611 LINK rpc_client_test 00:02:42.611 LINK spdk_nvme_discover 00:02:42.611 LINK vhost 00:02:42.611 LINK interrupt_tgt 00:02:42.611 LINK nvmf_tgt 00:02:42.611 LINK zipf 00:02:42.611 LINK env_dpdk_post_init 00:02:42.611 LINK reactor 00:02:42.611 LINK iscsi_tgt 00:02:42.611 LINK lsvmd 00:02:42.611 LINK vtophys 00:02:42.611 LINK reactor_perf 00:02:42.611 LINK spdk_tgt 00:02:42.611 LINK event_perf 00:02:42.611 LINK poller_perf 00:02:42.611 LINK led 00:02:42.611 LINK spdk_trace_record 00:02:42.611 LINK jsoncat 00:02:42.611 LINK histogram_perf 00:02:42.611 LINK boot_partition 00:02:42.611 LINK cmb_copy 00:02:42.611 LINK startup 00:02:42.611 LINK stub 00:02:42.874 LINK app_repeat 00:02:42.874 LINK bdev_svc 00:02:42.874 LINK doorbell_aers 00:02:42.874 LINK pmr_persistence 00:02:42.874 LINK connect_stress 00:02:42.874 LINK reserve 00:02:42.874 CXX test/cpp_headers/ioat_spec.o 00:02:42.874 LINK mkfs 00:02:42.874 CXX test/cpp_headers/iscsi_spec.o 00:02:42.874 CXX test/cpp_headers/json.o 00:02:42.874 LINK hello_blob 00:02:42.874 CXX test/cpp_headers/jsonrpc.o 00:02:42.874 CXX test/cpp_headers/likely.o 00:02:42.874 CXX test/cpp_headers/log.o 00:02:42.874 LINK fused_ordering 00:02:42.874 CXX test/cpp_headers/lvol.o 00:02:42.874 LINK err_injection 00:02:42.874 CXX test/cpp_headers/memory.o 00:02:42.874 LINK verify 00:02:42.874 CXX test/cpp_headers/mmio.o 00:02:42.874 LINK hello_bdev 00:02:42.874 CXX test/cpp_headers/nbd.o 00:02:42.874 LINK simple_copy 00:02:42.874 CXX test/cpp_headers/notify.o 00:02:42.874 CXX test/cpp_headers/nvme.o 00:02:42.874 CXX test/cpp_headers/nvme_intel.o 00:02:42.874 LINK ioat_perf 00:02:42.874 CXX test/cpp_headers/nvme_ocssd.o 00:02:42.874 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:42.874 CXX test/cpp_headers/nvme_spec.o 00:02:42.874 CXX test/cpp_headers/nvme_zns.o 00:02:42.874 CXX test/cpp_headers/nvmf_cmd.o 00:02:42.874 LINK hello_world 00:02:42.874 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:42.874 CXX test/cpp_headers/nvmf.o 00:02:42.874 CXX test/cpp_headers/nvmf_spec.o 00:02:42.874 CXX test/cpp_headers/nvmf_transport.o 00:02:42.874 CXX test/cpp_headers/opal.o 00:02:42.874 CXX test/cpp_headers/opal_spec.o 00:02:42.874 CXX test/cpp_headers/pci_ids.o 00:02:42.874 CXX test/cpp_headers/pipe.o 00:02:42.874 CXX test/cpp_headers/queue.o 00:02:42.874 LINK scheduler 00:02:42.874 CXX test/cpp_headers/rpc.o 00:02:42.874 CXX test/cpp_headers/reduce.o 00:02:42.874 CXX test/cpp_headers/scheduler.o 00:02:42.874 CXX test/cpp_headers/scsi.o 00:02:42.874 LINK hello_sock 00:02:42.874 CXX test/cpp_headers/scsi_spec.o 00:02:42.874 LINK sgl 00:02:42.874 CXX test/cpp_headers/sock.o 
00:02:42.874 LINK hotplug 00:02:42.874 CXX test/cpp_headers/stdinc.o 00:02:42.874 LINK nvme_dp 00:02:42.874 LINK reset 00:02:42.874 CXX test/cpp_headers/string.o 00:02:42.874 CXX test/cpp_headers/thread.o 00:02:42.874 LINK aer 00:02:42.874 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:42.874 CXX test/cpp_headers/trace.o 00:02:42.874 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:42.874 LINK thread 00:02:42.874 LINK overhead 00:02:42.874 LINK arbitration 00:02:42.874 LINK fdp 00:02:42.874 CXX test/cpp_headers/trace_parser.o 00:02:42.874 LINK nvme_compliance 00:02:43.134 LINK nvmf 00:02:43.134 LINK spdk_dd 00:02:43.134 LINK spdk_trace 00:02:43.134 LINK reconnect 00:02:43.134 CXX test/cpp_headers/tree.o 00:02:43.134 LINK idxd_perf 00:02:43.134 CXX test/cpp_headers/ublk.o 00:02:43.134 CXX test/cpp_headers/util.o 00:02:43.134 CXX test/cpp_headers/uuid.o 00:02:43.134 LINK dif 00:02:43.134 CXX test/cpp_headers/version.o 00:02:43.134 CXX test/cpp_headers/vfio_user_pci.o 00:02:43.134 LINK test_dma 00:02:43.134 CXX test/cpp_headers/vhost.o 00:02:43.134 CXX test/cpp_headers/vfio_user_spec.o 00:02:43.134 CXX test/cpp_headers/vmd.o 00:02:43.134 CXX test/cpp_headers/xor.o 00:02:43.134 LINK pci_ut 00:02:43.134 LINK bdevio 00:02:43.134 CXX test/cpp_headers/zipf.o 00:02:43.134 LINK abort 00:02:43.134 LINK accel_perf 00:02:43.134 LINK blobcli 00:02:43.392 LINK spdk_bdev 00:02:43.392 LINK nvme_manage 00:02:43.392 LINK nvme_fuzz 00:02:43.392 LINK spdk_nvme 00:02:43.392 LINK mem_callbacks 00:02:43.392 LINK spdk_nvme_identify 00:02:43.392 LINK spdk_top 00:02:43.652 LINK spdk_nvme_perf 00:02:43.652 LINK vhost_fuzz 00:02:43.652 LINK bdevperf 00:02:43.652 LINK cuse 00:02:43.652 LINK memory_ut 00:02:44.219 LINK iscsi_fuzz 00:02:46.120 LINK esnap 00:02:46.379 00:02:46.379 real 0m44.719s 00:02:46.379 user 6m13.525s 00:02:46.379 sys 3m55.415s 00:02:46.379 11:30:16 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:46.379 11:30:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.379 ************************************ 00:02:46.379 END TEST make 00:02:46.379 ************************************ 00:02:46.640 11:30:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:46.640 11:30:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:46.640 11:30:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:46.640 11:30:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:46.640 11:30:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:46.640 11:30:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:46.640 11:30:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:46.640 11:30:17 -- scripts/common.sh@335 -- # IFS=.-: 00:02:46.640 11:30:17 -- scripts/common.sh@335 -- # read -ra ver1 00:02:46.640 11:30:17 -- scripts/common.sh@336 -- # IFS=.-: 00:02:46.640 11:30:17 -- scripts/common.sh@336 -- # read -ra ver2 00:02:46.640 11:30:17 -- scripts/common.sh@337 -- # local 'op=<' 00:02:46.640 11:30:17 -- scripts/common.sh@339 -- # ver1_l=2 00:02:46.640 11:30:17 -- scripts/common.sh@340 -- # ver2_l=1 00:02:46.640 11:30:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:46.640 11:30:17 -- scripts/common.sh@343 -- # case "$op" in 00:02:46.640 11:30:17 -- scripts/common.sh@344 -- # : 1 00:02:46.640 11:30:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:46.640 11:30:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.640 11:30:17 -- scripts/common.sh@364 -- # decimal 1 00:02:46.640 11:30:17 -- scripts/common.sh@352 -- # local d=1 00:02:46.640 11:30:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:46.640 11:30:17 -- scripts/common.sh@354 -- # echo 1 00:02:46.640 11:30:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:46.640 11:30:17 -- scripts/common.sh@365 -- # decimal 2 00:02:46.640 11:30:17 -- scripts/common.sh@352 -- # local d=2 00:02:46.640 11:30:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:46.640 11:30:17 -- scripts/common.sh@354 -- # echo 2 00:02:46.640 11:30:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:46.640 11:30:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:46.640 11:30:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:46.640 11:30:17 -- scripts/common.sh@367 -- # return 0 00:02:46.640 11:30:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:46.640 11:30:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:46.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.640 --rc genhtml_branch_coverage=1 00:02:46.640 --rc genhtml_function_coverage=1 00:02:46.640 --rc genhtml_legend=1 00:02:46.640 --rc geninfo_all_blocks=1 00:02:46.640 --rc geninfo_unexecuted_blocks=1 00:02:46.640 00:02:46.640 ' 00:02:46.640 11:30:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:46.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.640 --rc genhtml_branch_coverage=1 00:02:46.640 --rc genhtml_function_coverage=1 00:02:46.640 --rc genhtml_legend=1 00:02:46.640 --rc geninfo_all_blocks=1 00:02:46.640 --rc geninfo_unexecuted_blocks=1 00:02:46.640 00:02:46.640 ' 00:02:46.640 11:30:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:46.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.640 --rc genhtml_branch_coverage=1 00:02:46.640 --rc genhtml_function_coverage=1 00:02:46.640 --rc genhtml_legend=1 00:02:46.640 --rc geninfo_all_blocks=1 00:02:46.640 --rc geninfo_unexecuted_blocks=1 00:02:46.640 00:02:46.640 ' 00:02:46.640 11:30:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:46.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.640 --rc genhtml_branch_coverage=1 00:02:46.640 --rc genhtml_function_coverage=1 00:02:46.640 --rc genhtml_legend=1 00:02:46.640 --rc geninfo_all_blocks=1 00:02:46.640 --rc geninfo_unexecuted_blocks=1 00:02:46.640 00:02:46.640 ' 00:02:46.640 11:30:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:46.640 11:30:17 -- nvmf/common.sh@7 -- # uname -s 00:02:46.640 11:30:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:46.640 11:30:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:46.640 11:30:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:46.640 11:30:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:46.640 11:30:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:46.640 11:30:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:46.640 11:30:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:46.640 11:30:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:46.640 11:30:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:46.640 11:30:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:46.640 11:30:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:46.640 11:30:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:46.640 11:30:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:46.640 11:30:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:46.640 11:30:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:46.640 11:30:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:46.640 11:30:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:46.640 11:30:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:46.640 11:30:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:46.640 11:30:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.640 11:30:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.640 11:30:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.640 11:30:17 -- paths/export.sh@5 -- # export PATH 00:02:46.640 11:30:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.640 11:30:17 -- nvmf/common.sh@46 -- # : 0 00:02:46.640 11:30:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:46.640 11:30:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:46.640 11:30:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:46.640 11:30:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:46.640 11:30:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:46.640 11:30:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:46.640 11:30:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:46.640 11:30:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:46.640 11:30:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:46.640 11:30:17 -- spdk/autotest.sh@32 -- # uname -s 00:02:46.640 11:30:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:46.640 11:30:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:46.640 11:30:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:46.640 11:30:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:46.640 11:30:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:46.640 11:30:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:46.640 11:30:17 -- 
spdk/autotest.sh@46 -- # type -P udevadm 00:02:46.640 11:30:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:46.640 11:30:17 -- spdk/autotest.sh@48 -- # udevadm_pid=3516627 00:02:46.640 11:30:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:46.640 11:30:17 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:46.640 11:30:17 -- spdk/autotest.sh@54 -- # echo 3516629 00:02:46.640 11:30:17 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:46.640 11:30:17 -- spdk/autotest.sh@56 -- # echo 3516630 00:02:46.640 11:30:17 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:46.640 11:30:17 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:46.640 11:30:17 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:46.640 11:30:17 -- spdk/autotest.sh@60 -- # echo 3516631 00:02:46.640 11:30:17 -- spdk/autotest.sh@62 -- # echo 3516632 00:02:46.640 11:30:17 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:46.640 11:30:17 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:46.640 11:30:17 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:46.640 11:30:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:46.640 11:30:17 -- common/autotest_common.sh@10 -- # set +x 00:02:46.640 11:30:17 -- spdk/autotest.sh@70 -- # create_test_list 00:02:46.640 11:30:17 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:46.641 11:30:17 -- common/autotest_common.sh@10 -- # set +x 00:02:46.641 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:46.641 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:46.900 11:30:17 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:46.900 11:30:17 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:46.900 11:30:17 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:46.900 11:30:17 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:46.900 11:30:17 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:46.900 11:30:17 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:46.900 11:30:17 -- common/autotest_common.sh@1450 -- # uname 00:02:46.900 11:30:17 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:46.900 11:30:17 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:46.900 11:30:17 -- common/autotest_common.sh@1470 -- # uname 00:02:46.900 11:30:17 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:46.900 11:30:17 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:46.900 11:30:17 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--rc geninfo_unexecuted_blocks=1 --version 00:02:46.900 lcov: LCOV version 1.15 00:02:46.900 11:30:17 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:49.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:49.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:49.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:49.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:49.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:49.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:11.372 11:30:39 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:11.372 11:30:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:11.372 11:30:39 -- common/autotest_common.sh@10 -- # set +x 00:03:11.372 11:30:39 -- spdk/autotest.sh@89 -- # rm -f 00:03:11.372 11:30:39 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.308 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:12.308 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:12.567 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:12.825 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:12.825 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:12.825 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:12.825 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:12.825 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:12.825 11:30:43 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:12.825 11:30:43 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:12.825 11:30:43 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:12.825 11:30:43 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:12.825 11:30:43 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:12.825 11:30:43 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:12.825 11:30:43 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:12.825 11:30:43 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned 
]] 00:03:12.825 11:30:43 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:12.825 11:30:43 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:12.825 11:30:43 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:03:12.825 11:30:43 -- spdk/autotest.sh@108 -- # grep -v p 00:03:12.825 11:30:43 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:12.825 11:30:43 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:12.826 11:30:43 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:12.826 11:30:43 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:12.826 11:30:43 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:12.826 No valid GPT data, bailing 00:03:12.826 11:30:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:12.826 11:30:43 -- scripts/common.sh@393 -- # pt= 00:03:12.826 11:30:43 -- scripts/common.sh@394 -- # return 1 00:03:12.826 11:30:43 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:12.826 1+0 records in 00:03:12.826 1+0 records out 00:03:12.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047354 s, 221 MB/s 00:03:12.826 11:30:43 -- spdk/autotest.sh@116 -- # sync 00:03:12.826 11:30:43 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:12.826 11:30:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:12.826 11:30:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:19.386 11:30:49 -- spdk/autotest.sh@122 -- # uname -s 00:03:19.386 11:30:49 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:19.386 11:30:49 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:19.386 11:30:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.386 11:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.386 11:30:49 -- common/autotest_common.sh@10 -- # set +x 00:03:19.386 ************************************ 00:03:19.386 START TEST setup.sh 00:03:19.386 ************************************ 00:03:19.386 11:30:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:19.386 * Looking for test storage... 
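The pre-cleanup trace above boils down to: skip zoned namespaces, check the NVMe namespace for an existing partition table (spdk-gpt.py reports "No valid GPT data, bailing"), wipe its first MiB with dd, and sync before the setup tests start. A condensed sketch of that sequence follows; the authoritative logic lives in spdk/autotest.sh and scripts/common.sh and performs additional checks not repeated here, and the device name is simply the one from this log.

    # Condensed sketch of the wipe step traced above (not the full autotest logic).
    dev=/dev/nvme0n1
    zoned=$(cat "/sys/block/${dev#/dev/}/queue/zoned" 2>/dev/null || echo none)
    if [[ $zoned == none ]]; then                          # skip zoned namespaces
        if ! blkid -s PTTYPE -o value "$dev" >/dev/null; then  # no partition table found
            dd if=/dev/zero of="$dev" bs=1M count=1        # clear stale metadata
            sync
        fi
    fi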
00:03:19.386 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:19.386 11:30:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:19.386 11:30:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:19.386 11:30:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:19.386 11:30:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:19.386 11:30:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:19.386 11:30:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:19.386 11:30:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:19.386 11:30:49 -- scripts/common.sh@335 -- # IFS=.-: 00:03:19.386 11:30:49 -- scripts/common.sh@335 -- # read -ra ver1 00:03:19.386 11:30:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:19.386 11:30:49 -- scripts/common.sh@336 -- # read -ra ver2 00:03:19.386 11:30:49 -- scripts/common.sh@337 -- # local 'op=<' 00:03:19.386 11:30:49 -- scripts/common.sh@339 -- # ver1_l=2 00:03:19.386 11:30:49 -- scripts/common.sh@340 -- # ver2_l=1 00:03:19.386 11:30:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:19.386 11:30:49 -- scripts/common.sh@343 -- # case "$op" in 00:03:19.386 11:30:49 -- scripts/common.sh@344 -- # : 1 00:03:19.386 11:30:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:19.386 11:30:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:19.386 11:30:49 -- scripts/common.sh@364 -- # decimal 1 00:03:19.386 11:30:49 -- scripts/common.sh@352 -- # local d=1 00:03:19.386 11:30:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:19.386 11:30:49 -- scripts/common.sh@354 -- # echo 1 00:03:19.386 11:30:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:19.386 11:30:49 -- scripts/common.sh@365 -- # decimal 2 00:03:19.386 11:30:49 -- scripts/common.sh@352 -- # local d=2 00:03:19.386 11:30:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:19.386 11:30:49 -- scripts/common.sh@354 -- # echo 2 00:03:19.386 11:30:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:19.386 11:30:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:19.386 11:30:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:19.386 11:30:49 -- scripts/common.sh@367 -- # return 0 00:03:19.386 11:30:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:19.386 11:30:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:19.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.386 --rc genhtml_branch_coverage=1 00:03:19.386 --rc genhtml_function_coverage=1 00:03:19.386 --rc genhtml_legend=1 00:03:19.386 --rc geninfo_all_blocks=1 00:03:19.386 --rc geninfo_unexecuted_blocks=1 00:03:19.386 00:03:19.386 ' 00:03:19.386 11:30:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:19.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.386 --rc genhtml_branch_coverage=1 00:03:19.386 --rc genhtml_function_coverage=1 00:03:19.386 --rc genhtml_legend=1 00:03:19.386 --rc geninfo_all_blocks=1 00:03:19.386 --rc geninfo_unexecuted_blocks=1 00:03:19.386 00:03:19.386 ' 00:03:19.386 11:30:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:19.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.386 --rc genhtml_branch_coverage=1 00:03:19.386 --rc genhtml_function_coverage=1 00:03:19.386 --rc genhtml_legend=1 00:03:19.386 --rc geninfo_all_blocks=1 00:03:19.386 --rc geninfo_unexecuted_blocks=1 00:03:19.386 00:03:19.386 ' 
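Aside: the lt / cmp_versions trace above is how these tests pick the lcov option spelling: the installed lcov version string (1.15 on this runner, taken from `lcov --version | awk '{print $NF}'`) is split on '.', '-' and ':' and compared field by field against 2, and the legacy --rc lcov_* options are kept when it is older. A simplified, numeric-only sketch of that comparison (function name is illustrative; pre-release suffixes are not handled):

  #!/usr/bin/env bash
  # Return 0 (true) when dotted version $1 sorts strictly before version $2.
  version_lt() {
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( 10#$x < 10#$y )) && return 0   # earlier field already smaller
          (( 10#$x > 10#$y )) && return 1   # earlier field already larger
      done
      return 1                              # equal is not "less than"
  }

  # Usage example (assumes lcov is installed, as it is on this runner):
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      echo "lcov < 2: keep the legacy --rc lcov_branch_coverage=1 style options"
  fi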
00:03:19.386 11:30:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:19.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.386 --rc genhtml_branch_coverage=1 00:03:19.386 --rc genhtml_function_coverage=1 00:03:19.386 --rc genhtml_legend=1 00:03:19.386 --rc geninfo_all_blocks=1 00:03:19.386 --rc geninfo_unexecuted_blocks=1 00:03:19.386 00:03:19.386 ' 00:03:19.386 11:30:49 -- setup/test-setup.sh@10 -- # uname -s 00:03:19.386 11:30:49 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:19.386 11:30:49 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:19.386 11:30:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.386 11:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.386 11:30:49 -- common/autotest_common.sh@10 -- # set +x 00:03:19.386 ************************************ 00:03:19.386 START TEST acl 00:03:19.386 ************************************ 00:03:19.386 11:30:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:19.386 * Looking for test storage... 00:03:19.386 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:19.386 11:30:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:19.386 11:30:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:19.386 11:30:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:19.386 11:30:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:19.386 11:30:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:19.387 11:30:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:19.387 11:30:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:19.387 11:30:49 -- scripts/common.sh@335 -- # IFS=.-: 00:03:19.387 11:30:49 -- scripts/common.sh@335 -- # read -ra ver1 00:03:19.387 11:30:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:19.387 11:30:49 -- scripts/common.sh@336 -- # read -ra ver2 00:03:19.387 11:30:49 -- scripts/common.sh@337 -- # local 'op=<' 00:03:19.387 11:30:49 -- scripts/common.sh@339 -- # ver1_l=2 00:03:19.387 11:30:49 -- scripts/common.sh@340 -- # ver2_l=1 00:03:19.387 11:30:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:19.387 11:30:49 -- scripts/common.sh@343 -- # case "$op" in 00:03:19.387 11:30:49 -- scripts/common.sh@344 -- # : 1 00:03:19.387 11:30:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:19.387 11:30:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:19.387 11:30:49 -- scripts/common.sh@364 -- # decimal 1 00:03:19.387 11:30:49 -- scripts/common.sh@352 -- # local d=1 00:03:19.387 11:30:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:19.387 11:30:49 -- scripts/common.sh@354 -- # echo 1 00:03:19.387 11:30:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:19.387 11:30:49 -- scripts/common.sh@365 -- # decimal 2 00:03:19.387 11:30:49 -- scripts/common.sh@352 -- # local d=2 00:03:19.387 11:30:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:19.387 11:30:49 -- scripts/common.sh@354 -- # echo 2 00:03:19.387 11:30:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:19.387 11:30:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:19.387 11:30:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:19.387 11:30:49 -- scripts/common.sh@367 -- # return 0 00:03:19.387 11:30:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:19.387 11:30:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:19.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.387 --rc genhtml_branch_coverage=1 00:03:19.387 --rc genhtml_function_coverage=1 00:03:19.387 --rc genhtml_legend=1 00:03:19.387 --rc geninfo_all_blocks=1 00:03:19.387 --rc geninfo_unexecuted_blocks=1 00:03:19.387 00:03:19.387 ' 00:03:19.387 11:30:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:19.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.387 --rc genhtml_branch_coverage=1 00:03:19.387 --rc genhtml_function_coverage=1 00:03:19.387 --rc genhtml_legend=1 00:03:19.387 --rc geninfo_all_blocks=1 00:03:19.387 --rc geninfo_unexecuted_blocks=1 00:03:19.387 00:03:19.387 ' 00:03:19.387 11:30:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:19.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.387 --rc genhtml_branch_coverage=1 00:03:19.387 --rc genhtml_function_coverage=1 00:03:19.387 --rc genhtml_legend=1 00:03:19.387 --rc geninfo_all_blocks=1 00:03:19.387 --rc geninfo_unexecuted_blocks=1 00:03:19.387 00:03:19.387 ' 00:03:19.387 11:30:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:19.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.387 --rc genhtml_branch_coverage=1 00:03:19.387 --rc genhtml_function_coverage=1 00:03:19.387 --rc genhtml_legend=1 00:03:19.387 --rc geninfo_all_blocks=1 00:03:19.387 --rc geninfo_unexecuted_blocks=1 00:03:19.387 00:03:19.387 ' 00:03:19.387 11:30:49 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:19.387 11:30:49 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:19.387 11:30:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:19.387 11:30:49 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:19.387 11:30:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:19.387 11:30:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:19.387 11:30:49 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:19.387 11:30:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:19.387 11:30:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:19.387 11:30:49 -- setup/acl.sh@12 -- # devs=() 00:03:19.387 11:30:49 -- setup/acl.sh@12 -- # declare -a devs 00:03:19.387 11:30:49 -- setup/acl.sh@13 -- # drivers=() 00:03:19.387 11:30:49 -- setup/acl.sh@13 -- # declare -A drivers 00:03:19.387 11:30:49 -- setup/acl.sh@51 -- # 
setup reset 00:03:19.387 11:30:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.387 11:30:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.577 11:30:53 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:23.577 11:30:53 -- setup/acl.sh@16 -- # local dev driver 00:03:23.577 11:30:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.577 11:30:53 -- setup/acl.sh@15 -- # setup output status 00:03:23.577 11:30:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.577 11:30:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:26.110 Hugepages 00:03:26.110 node hugesize free / total 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # continue 00:03:26.110 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # continue 00:03:26.110 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # continue 00:03:26.110 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.110 00:03:26.110 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # continue 00:03:26.110 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:26.110 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.110 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.110 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:26.110 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.110 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.110 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:26.110 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.110 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.110 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.110 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:26.110 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.110 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.110 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- 
setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # continue 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:26.370 11:30:56 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:26.370 11:30:56 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:26.370 11:30:56 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:26.370 11:30:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.370 11:30:56 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:26.370 11:30:56 -- setup/acl.sh@54 -- # run_test denied denied 00:03:26.370 11:30:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.370 11:30:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.370 11:30:56 -- common/autotest_common.sh@10 -- # set +x 00:03:26.370 ************************************ 00:03:26.370 START TEST denied 00:03:26.370 ************************************ 00:03:26.370 11:30:56 -- common/autotest_common.sh@1114 -- # denied 00:03:26.370 11:30:56 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:26.370 11:30:56 -- setup/acl.sh@38 -- # setup output config 
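Aside: the repeated `read -r _ dev _ _ _ driver _` loop above is acl.sh walking the table printed by setup.sh status (columns: Type BDF Vendor Device NUMA Driver ...), keeping rows whose second column looks like a PCI BDF and whose driver column is nvme; that is how 0000:d8:00.0 ends up as the only entry in devs/drivers before the denied test below. A rough standalone version of that filter (the setup.sh path and column order are taken from this trace):

  #!/usr/bin/env bash
  # Collect NVMe controllers from a "setup.sh status"-style table.
  declare -a devs
  declare -A drivers

  while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue     # only rows with a BDF (dddd:bb:dd.f) in column 2
      [[ $driver == nvme ]] || continue     # only controllers currently bound to nvme
      devs+=("$dev")
      drivers["$dev"]=$driver
  done < <(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status)

  echo "nvme controllers: ${devs[*]:-none}"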
00:03:26.370 11:30:56 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:26.370 11:30:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.370 11:30:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:30.562 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:30.562 11:31:00 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:30.562 11:31:00 -- setup/acl.sh@28 -- # local dev driver 00:03:30.562 11:31:00 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:30.562 11:31:00 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:30.562 11:31:00 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:30.562 11:31:00 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:30.562 11:31:00 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:30.562 11:31:00 -- setup/acl.sh@41 -- # setup reset 00:03:30.562 11:31:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.562 11:31:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.751 00:03:34.751 real 0m8.357s 00:03:34.751 user 0m2.686s 00:03:34.751 sys 0m5.031s 00:03:34.751 11:31:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.751 11:31:05 -- common/autotest_common.sh@10 -- # set +x 00:03:34.751 ************************************ 00:03:34.751 END TEST denied 00:03:34.751 ************************************ 00:03:34.751 11:31:05 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:34.751 11:31:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.751 11:31:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.751 11:31:05 -- common/autotest_common.sh@10 -- # set +x 00:03:34.751 ************************************ 00:03:34.751 START TEST allowed 00:03:34.751 ************************************ 00:03:34.751 11:31:05 -- common/autotest_common.sh@1114 -- # allowed 00:03:34.751 11:31:05 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:34.751 11:31:05 -- setup/acl.sh@45 -- # setup output config 00:03:34.751 11:31:05 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:34.751 11:31:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.751 11:31:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:40.104 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:40.104 11:31:10 -- setup/acl.sh@47 -- # verify 00:03:40.104 11:31:10 -- setup/acl.sh@28 -- # local dev driver 00:03:40.104 11:31:10 -- setup/acl.sh@48 -- # setup reset 00:03:40.104 11:31:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.104 11:31:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.388 00:03:43.388 real 0m8.506s 00:03:43.388 user 0m1.993s 00:03:43.388 sys 0m4.489s 00:03:43.388 11:31:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.388 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:43.388 ************************************ 00:03:43.388 END TEST allowed 00:03:43.388 ************************************ 00:03:43.388 00:03:43.388 real 0m24.113s 00:03:43.388 user 0m7.244s 00:03:43.388 sys 0m14.372s 00:03:43.388 11:31:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.388 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:43.388 ************************************ 00:03:43.388 END TEST acl 00:03:43.388 ************************************ 
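Aside: both the denied and allowed checks in the acl test above reduce to one question: which kernel driver is the controller at 0000:d8:00.0 bound to right now? The test resolves /sys/bus/pci/devices/<BDF>/driver and compares the basename (nvme when the device is blocked from setup, vfio-pci once setup.sh config has claimed it). A minimal version of that probe, with the BDF from this run as the example:

  #!/usr/bin/env bash
  # Print the driver a PCI function is currently bound to, or "none".
  bound_driver() {
      local bdf=$1
      if [[ -e /sys/bus/pci/devices/${bdf}/driver ]]; then
          basename "$(readlink -f "/sys/bus/pci/devices/${bdf}/driver")"   # e.g. nvme, vfio-pci, ioatdma
      else
          echo none
      fi
  }

  bdf=0000:d8:00.0                      # the controller exercised in this run
  echo "${bdf}: $(bound_driver "$bdf")"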
00:03:43.388 11:31:13 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:43.388 11:31:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.388 11:31:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.388 11:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:43.388 ************************************ 00:03:43.388 START TEST hugepages 00:03:43.388 ************************************ 00:03:43.388 11:31:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:43.388 * Looking for test storage... 00:03:43.647 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:43.647 11:31:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:43.647 11:31:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:43.647 11:31:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:43.647 11:31:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:43.647 11:31:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:43.647 11:31:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:43.647 11:31:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:43.647 11:31:14 -- scripts/common.sh@335 -- # IFS=.-: 00:03:43.647 11:31:14 -- scripts/common.sh@335 -- # read -ra ver1 00:03:43.647 11:31:14 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.648 11:31:14 -- scripts/common.sh@336 -- # read -ra ver2 00:03:43.648 11:31:14 -- scripts/common.sh@337 -- # local 'op=<' 00:03:43.648 11:31:14 -- scripts/common.sh@339 -- # ver1_l=2 00:03:43.648 11:31:14 -- scripts/common.sh@340 -- # ver2_l=1 00:03:43.648 11:31:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:43.648 11:31:14 -- scripts/common.sh@343 -- # case "$op" in 00:03:43.648 11:31:14 -- scripts/common.sh@344 -- # : 1 00:03:43.648 11:31:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:43.648 11:31:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.648 11:31:14 -- scripts/common.sh@364 -- # decimal 1 00:03:43.648 11:31:14 -- scripts/common.sh@352 -- # local d=1 00:03:43.648 11:31:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.648 11:31:14 -- scripts/common.sh@354 -- # echo 1 00:03:43.648 11:31:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:43.648 11:31:14 -- scripts/common.sh@365 -- # decimal 2 00:03:43.648 11:31:14 -- scripts/common.sh@352 -- # local d=2 00:03:43.648 11:31:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.648 11:31:14 -- scripts/common.sh@354 -- # echo 2 00:03:43.648 11:31:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:43.648 11:31:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:43.648 11:31:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:43.648 11:31:14 -- scripts/common.sh@367 -- # return 0 00:03:43.648 11:31:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.648 11:31:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:43.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.648 --rc genhtml_branch_coverage=1 00:03:43.648 --rc genhtml_function_coverage=1 00:03:43.648 --rc genhtml_legend=1 00:03:43.648 --rc geninfo_all_blocks=1 00:03:43.648 --rc geninfo_unexecuted_blocks=1 00:03:43.648 00:03:43.648 ' 00:03:43.648 11:31:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:43.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.648 --rc genhtml_branch_coverage=1 00:03:43.648 --rc genhtml_function_coverage=1 00:03:43.648 --rc genhtml_legend=1 00:03:43.648 --rc geninfo_all_blocks=1 00:03:43.648 --rc geninfo_unexecuted_blocks=1 00:03:43.648 00:03:43.648 ' 00:03:43.648 11:31:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:43.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.648 --rc genhtml_branch_coverage=1 00:03:43.648 --rc genhtml_function_coverage=1 00:03:43.648 --rc genhtml_legend=1 00:03:43.648 --rc geninfo_all_blocks=1 00:03:43.648 --rc geninfo_unexecuted_blocks=1 00:03:43.648 00:03:43.648 ' 00:03:43.648 11:31:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:43.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.648 --rc genhtml_branch_coverage=1 00:03:43.648 --rc genhtml_function_coverage=1 00:03:43.648 --rc genhtml_legend=1 00:03:43.648 --rc geninfo_all_blocks=1 00:03:43.648 --rc geninfo_unexecuted_blocks=1 00:03:43.648 00:03:43.648 ' 00:03:43.648 11:31:14 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:43.648 11:31:14 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:43.648 11:31:14 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:43.648 11:31:14 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:43.648 11:31:14 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:43.648 11:31:14 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:43.648 11:31:14 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:43.648 11:31:14 -- setup/common.sh@18 -- # local node= 00:03:43.648 11:31:14 -- setup/common.sh@19 -- # local var val 00:03:43.648 11:31:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.648 11:31:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.648 11:31:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.648 11:31:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.648 11:31:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.648 
11:31:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 42015364 kB' 'MemAvailable: 45722068 kB' 'Buffers: 4100 kB' 'Cached: 9965172 kB' 'SwapCached: 0 kB' 'Active: 6743812 kB' 'Inactive: 3693068 kB' 'Active(anon): 6349396 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471036 kB' 'Mapped: 174252 kB' 'Shmem: 5881788 kB' 'KReclaimable: 232620 kB' 'Slab: 1065332 kB' 'SReclaimable: 232620 kB' 'SUnreclaim: 832712 kB' 'KernelStack: 21952 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433344 kB' 'Committed_AS: 7520844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217756 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 
11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.648 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.648 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 
11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # continue 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.649 11:31:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.649 11:31:14 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.649 11:31:14 -- setup/common.sh@33 -- # echo 2048 00:03:43.649 11:31:14 -- setup/common.sh@33 -- # return 0 00:03:43.649 11:31:14 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:43.649 11:31:14 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:43.649 11:31:14 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:43.649 11:31:14 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:43.649 11:31:14 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:43.649 11:31:14 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:43.649 11:31:14 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:43.649 11:31:14 -- setup/hugepages.sh@207 -- # get_nodes 00:03:43.649 11:31:14 -- setup/hugepages.sh@27 -- # local node 00:03:43.649 11:31:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.649 11:31:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:43.649 11:31:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.649 11:31:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:43.649 11:31:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.649 11:31:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.649 11:31:14 -- setup/hugepages.sh@208 -- # clear_hp 00:03:43.649 11:31:14 -- setup/hugepages.sh@37 -- # local node hp 00:03:43.649 11:31:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.649 11:31:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.649 11:31:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.649 
11:31:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.649 11:31:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.649 11:31:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.649 11:31:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.649 11:31:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.650 11:31:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.650 11:31:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:43.650 11:31:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.650 11:31:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.650 11:31:14 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:43.650 11:31:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.650 11:31:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.650 11:31:14 -- common/autotest_common.sh@10 -- # set +x 00:03:43.650 ************************************ 00:03:43.650 START TEST default_setup 00:03:43.650 ************************************ 00:03:43.650 11:31:14 -- common/autotest_common.sh@1114 -- # default_setup 00:03:43.650 11:31:14 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:43.650 11:31:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.650 11:31:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.650 11:31:14 -- setup/hugepages.sh@51 -- # shift 00:03:43.650 11:31:14 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.650 11:31:14 -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.650 11:31:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.650 11:31:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.650 11:31:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.650 11:31:14 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.650 11:31:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.650 11:31:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.650 11:31:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.650 11:31:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.650 11:31:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.650 11:31:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.650 11:31:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.650 11:31:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:43.650 11:31:14 -- setup/hugepages.sh@73 -- # return 0 00:03:43.650 11:31:14 -- setup/hugepages.sh@137 -- # setup output 00:03:43.650 11:31:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.650 11:31:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:46.936 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:46.936 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:46.936 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:46.936 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:46.936 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:46.936 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:46.936 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 
0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:47.195 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:49.100 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:49.100 11:31:19 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:49.101 11:31:19 -- setup/hugepages.sh@89 -- # local node 00:03:49.101 11:31:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.101 11:31:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.101 11:31:19 -- setup/hugepages.sh@92 -- # local surp 00:03:49.101 11:31:19 -- setup/hugepages.sh@93 -- # local resv 00:03:49.101 11:31:19 -- setup/hugepages.sh@94 -- # local anon 00:03:49.101 11:31:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.101 11:31:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.101 11:31:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.101 11:31:19 -- setup/common.sh@18 -- # local node= 00:03:49.101 11:31:19 -- setup/common.sh@19 -- # local var val 00:03:49.101 11:31:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.101 11:31:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.101 11:31:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.101 11:31:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.101 11:31:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.101 11:31:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.101 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 11:31:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44185872 kB' 'MemAvailable: 47892480 kB' 'Buffers: 4100 kB' 'Cached: 9965304 kB' 'SwapCached: 0 kB' 'Active: 6745384 kB' 'Inactive: 3693068 kB' 'Active(anon): 6350968 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472452 kB' 'Mapped: 174316 kB' 'Shmem: 5881920 kB' 'KReclaimable: 232428 kB' 'Slab: 1063976 kB' 'SReclaimable: 232428 kB' 'SUnreclaim: 831548 kB' 'KernelStack: 22048 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7524388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217756 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:49.101 11:31:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.101 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.101 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 11:31:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 
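Aside: the long key-by-key scan printed here (and continuing below until AnonHugePages matches) is setup/common.sh's get_meminfo at work: snapshot /proc/meminfo (or a node's own meminfo) with mapfile, strip any "Node N " prefix, then walk the entries until the requested field is found and echo its value. A condensed sketch of that helper follows; it mirrors the traced steps but is not the original code:

  #!/usr/bin/env bash
  shopt -s extglob                          # for the +([0-9]) pattern below

  # Fetch one field (e.g. Hugepagesize, AnonHugePages, HugePages_Surp) from
  # /proc/meminfo, or from a node's meminfo when a node number is given.
  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
          mem_f=/sys/devices/system/node/node${node}/meminfo

      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each line with "Node N "

      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                   # value in kB, or a bare page count
              return 0
          fi
      done
      return 1
  }

  get_meminfo Hugepagesize                  # 2048 on this system
  get_meminfo HugePages_Free 0              # free 2 MiB pages on node 0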
00:03:49.363 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.363 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.363 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 
-- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 
00:03:49.364 11:31:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.364 11:31:19 -- setup/common.sh@33 -- # echo 0 00:03:49.364 11:31:19 -- setup/common.sh@33 -- # return 0 00:03:49.364 11:31:19 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.364 11:31:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.364 11:31:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.364 11:31:19 -- setup/common.sh@18 -- # local node= 00:03:49.364 11:31:19 -- setup/common.sh@19 -- # local var val 00:03:49.364 11:31:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.364 11:31:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.364 11:31:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.364 11:31:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.364 11:31:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.364 11:31:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.364 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44187484 kB' 'MemAvailable: 47894092 kB' 'Buffers: 4100 kB' 'Cached: 9965308 kB' 'SwapCached: 0 kB' 'Active: 6745632 kB' 'Inactive: 3693068 kB' 'Active(anon): 6351216 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472676 kB' 'Mapped: 174272 kB' 'Shmem: 5881924 kB' 'KReclaimable: 232428 kB' 'Slab: 1064008 kB' 'SReclaimable: 232428 kB' 'SUnreclaim: 831580 kB' 'KernelStack: 21968 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7524580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.365 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.365 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.366 11:31:19 -- setup/common.sh@33 -- # echo 0 00:03:49.366 11:31:19 -- setup/common.sh@33 -- # return 0 00:03:49.366 11:31:19 -- setup/hugepages.sh@99 -- # surp=0 00:03:49.366 11:31:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.366 11:31:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.366 11:31:19 -- setup/common.sh@18 -- # local node= 00:03:49.366 11:31:19 -- setup/common.sh@19 -- # local var val 00:03:49.366 11:31:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.366 11:31:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.366 11:31:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.366 11:31:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.366 11:31:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.366 11:31:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
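[Editorial note] The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above and below come from a single helper walking /proc/meminfo (or a per-node meminfo file) one key at a time until it finds the key it was asked for, echoing that value and 0 otherwise. A minimal sketch of that pattern, under stated assumptions, follows; get_meminfo_sketch is a hypothetical name, not the actual setup/common.sh helper, which is visible here only through its trace.

#!/usr/bin/env bash
# Sketch only: reproduces the scan pattern seen in the trace, not the real
# setup/common.sh implementation.
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}        # key to look up, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node meminfo lines carry a "Node <n> " prefix; strip it so both
    # files look like plain "Key: value" lines to the loop below.
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching key is one "[[ ... ]] / continue" pair in xtrace.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    echo 0
}

# Example lookups matching the values reported in this run:
get_meminfo_sketch HugePages_Surp      # -> 0
get_meminfo_sketch HugePages_Total 0   # -> hugepage pool size on node 0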
00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44185356 kB' 'MemAvailable: 47891964 kB' 'Buffers: 4100 kB' 'Cached: 9965324 kB' 'SwapCached: 0 kB' 'Active: 6745484 kB' 'Inactive: 3693068 kB' 'Active(anon): 6351068 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472472 kB' 'Mapped: 174272 kB' 'Shmem: 5881940 kB' 'KReclaimable: 232428 kB' 'Slab: 1064008 kB' 'SReclaimable: 232428 kB' 'SUnreclaim: 831580 kB' 'KernelStack: 22032 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7524600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217868 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # 
continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.366 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.366 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 
11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.367 11:31:19 -- setup/common.sh@33 -- # echo 0 00:03:49.367 11:31:19 -- setup/common.sh@33 -- # return 0 00:03:49.367 11:31:19 -- setup/hugepages.sh@100 -- # resv=0 00:03:49.367 11:31:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.367 nr_hugepages=1024 00:03:49.367 11:31:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.367 resv_hugepages=0 00:03:49.367 11:31:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.367 surplus_hugepages=0 00:03:49.367 11:31:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.367 anon_hugepages=0 00:03:49.367 11:31:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.367 11:31:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.367 11:31:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.367 11:31:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.367 11:31:19 -- setup/common.sh@18 -- # local node= 00:03:49.367 11:31:19 -- setup/common.sh@19 -- # local var val 00:03:49.367 11:31:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.367 11:31:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.367 11:31:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.367 11:31:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.367 11:31:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.367 11:31:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44182980 kB' 'MemAvailable: 47889588 kB' 'Buffers: 4100 kB' 'Cached: 9965336 kB' 'SwapCached: 0 kB' 'Active: 6745692 kB' 'Inactive: 3693068 kB' 'Active(anon): 6351276 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472704 kB' 'Mapped: 174272 
kB' 'Shmem: 5881952 kB' 'KReclaimable: 232428 kB' 'Slab: 1064008 kB' 'SReclaimable: 232428 kB' 'SUnreclaim: 831580 kB' 'KernelStack: 22064 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7524616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217884 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.367 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.367 11:31:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 
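[Editorial note] Once those lookups return, the hugepages test folds the numbers into a simple consistency check: the echoed nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages values must add up to the configured pool, and each NUMA node must report the share the test expects (the "node0=1024 expecting 1024" line near the end of this block). Below is a sketch of that bookkeeping, assuming it reuses the get_meminfo_sketch helper above; verify_nr_hugepages_sketch is a hypothetical name and the structure is an assumption, not the real setup/hugepages.sh function.

# Sketch only: the accounting mirrors what the trace checks, but the function
# name and structure are assumptions; it reuses get_meminfo_sketch from above.
verify_nr_hugepages_sketch() {
    local expected=$1               # pool size under test, 1024 in this run
    local total surp resv anon

    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    anon=$(get_meminfo_sketch AnonHugePages)   # reported only, not checked

    echo "nr_hugepages=$total resv_hugepages=$resv" \
         "surplus_hugepages=$surp anon_hugepages=$anon"

    # System-wide check: every expected page is accounted for by the pool
    # itself plus whatever is currently surplus or reserved.
    (( expected == total + surp + resv )) || return 1

    # Per-node check: this run expects the whole pool on node 0 and nothing
    # on the other node, hence "node0=1024 expecting 1024".
    local -A want=( [0]=$expected )
    local node id have
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}
        have=$(get_meminfo_sketch HugePages_Total "$id")
        echo "node$id=$have expecting ${want[$id]:-0}"
        (( have == ${want[$id]:-0} )) || return 1
    done
}

verify_nr_hugepages_sketch 1024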
00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 
11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.368 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.368 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 
11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.369 11:31:19 -- setup/common.sh@33 -- # echo 1024 00:03:49.369 11:31:19 -- setup/common.sh@33 -- # return 0 00:03:49.369 11:31:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.369 11:31:19 -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.369 11:31:19 -- setup/hugepages.sh@27 -- # local node 00:03:49.369 11:31:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.369 11:31:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.369 11:31:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.369 11:31:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:49.369 11:31:19 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.369 11:31:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.369 11:31:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.369 11:31:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.369 11:31:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.369 11:31:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.369 11:31:19 -- setup/common.sh@18 -- # local node=0 00:03:49.369 11:31:19 -- setup/common.sh@19 -- # local var val 00:03:49.369 11:31:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.369 11:31:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.369 11:31:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.369 11:31:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.369 11:31:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.369 11:31:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26476580 kB' 'MemUsed: 6157856 kB' 'SwapCached: 0 kB' 'Active: 2365404 kB' 'Inactive: 163156 kB' 'Active(anon): 2201784 kB' 'Inactive(anon): 0 kB' 'Active(file): 163620 kB' 'Inactive(file): 163156 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2182756 kB' 'Mapped: 58840 kB' 'AnonPages: 348960 kB' 'Shmem: 1855980 kB' 'KernelStack: 12184 kB' 'PageTables: 5412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103736 kB' 'Slab: 529104 kB' 'SReclaimable: 103736 kB' 'SUnreclaim: 425368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.369 11:31:19 -- setup/common.sh@32 -- # continue 00:03:49.369 11:31:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.369 11:31:19 -- 
00:03:49.369 [xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (MemUsed, SwapCached, Active, Inactive, ... HugePages_Total, HugePages_Free) and continues until it reaches HugePages_Surp]
00:03:49.370 11:31:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.370 11:31:19 -- setup/common.sh@33 -- # echo 0
00:03:49.370 11:31:19 -- setup/common.sh@33 -- # return 0
00:03:49.370 11:31:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:49.370 11:31:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:49.370 11:31:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:49.370 11:31:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:49.370 11:31:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:49.370 node0=1024 expecting 1024
00:03:49.370 11:31:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:49.370 real 0m5.719s
00:03:49.370 user 0m1.478s
00:03:49.370 sys 0m2.332s
00:03:49.370 11:31:19 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:49.370 11:31:19 -- common/autotest_common.sh@10 -- # set +x
00:03:49.370 ************************************
00:03:49.370 END
TEST default_setup 00:03:49.370 ************************************ 00:03:49.370 11:31:19 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:49.370 11:31:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.370 11:31:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.370 11:31:19 -- common/autotest_common.sh@10 -- # set +x 00:03:49.370 ************************************ 00:03:49.370 START TEST per_node_1G_alloc 00:03:49.370 ************************************ 00:03:49.370 11:31:19 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:49.370 11:31:19 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:49.370 11:31:19 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:49.370 11:31:19 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:49.370 11:31:19 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:49.370 11:31:19 -- setup/hugepages.sh@51 -- # shift 00:03:49.370 11:31:19 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:49.370 11:31:19 -- setup/hugepages.sh@52 -- # local node_ids 00:03:49.370 11:31:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.370 11:31:19 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:49.370 11:31:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:49.370 11:31:19 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:49.370 11:31:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.370 11:31:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:49.370 11:31:19 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.370 11:31:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.370 11:31:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.370 11:31:19 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:49.370 11:31:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:49.370 11:31:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:49.370 11:31:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:49.370 11:31:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:49.370 11:31:19 -- setup/hugepages.sh@73 -- # return 0 00:03:49.370 11:31:19 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:49.370 11:31:19 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:49.370 11:31:19 -- setup/hugepages.sh@146 -- # setup output 00:03:49.370 11:31:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.370 11:31:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:52.658 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.658 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.920 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.920 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.920 
0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.920 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.920 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.920 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.920 11:31:23 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:52.920 11:31:23 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:52.920 11:31:23 -- setup/hugepages.sh@89 -- # local node 00:03:52.920 11:31:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.920 11:31:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.920 11:31:23 -- setup/hugepages.sh@92 -- # local surp 00:03:52.920 11:31:23 -- setup/hugepages.sh@93 -- # local resv 00:03:52.920 11:31:23 -- setup/hugepages.sh@94 -- # local anon 00:03:52.920 11:31:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.920 11:31:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.920 11:31:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.920 11:31:23 -- setup/common.sh@18 -- # local node= 00:03:52.920 11:31:23 -- setup/common.sh@19 -- # local var val 00:03:52.920 11:31:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.920 11:31:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.920 11:31:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.920 11:31:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.920 11:31:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.920 11:31:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.920 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.920 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.920 11:31:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44151256 kB' 'MemAvailable: 47857868 kB' 'Buffers: 4100 kB' 'Cached: 9965432 kB' 'SwapCached: 0 kB' 'Active: 6751056 kB' 'Inactive: 3693068 kB' 'Active(anon): 6356640 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477880 kB' 'Mapped: 175292 kB' 'Shmem: 5882048 kB' 'KReclaimable: 232436 kB' 'Slab: 1064312 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 831876 kB' 'KernelStack: 21968 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7554608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217984 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:52.920 11:31:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.920 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.920 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.920 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.920 11:31:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.920 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.920 11:31:23 -- setup/common.sh@31 -- # 
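The records above show scripts/setup.sh being driven with NRHUGE=512 and HUGENODE=0,1, i.e. 512 default-size (2048 kB) hugepages reserved on each of the two NUMA nodes, which is what later sums to the HugePages_Total: 1024 seen in the snapshots. A minimal sketch of that per-node reservation through the standard kernel sysfs layout (an illustration of the idea, not the project's setup.sh):

#!/usr/bin/env bash
# Reserve NRHUGE default-size hugepages on each node listed in HUGENODE,
# mirroring the NRHUGE=512 HUGENODE=0,1 invocation traced above.
set -euo pipefail

NRHUGE=512
HUGENODE="0,1"
hugepage_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)   # 2048 on this runner

IFS=',' read -ra nodes <<< "${HUGENODE}"
for node in "${nodes[@]}"; do
    sysfs="/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages"
    echo "${NRHUGE}" | sudo tee "${sysfs}" >/dev/null   # per-node pool, 512 pages each
done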
00:03:52.921 [xtrace condensed: setup/common.sh@31-32 keeps reading /proc/meminfo fields (IFS=': '; read -r var val _) and continuing until the requested AnonHugePages key is reached]
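The condensed loop above is setup/common.sh's get_meminfo scanning /proc/meminfo with IFS=': ' until it hits the requested field (here AnonHugePages). A simplified, self-contained equivalent of that lookup (a sketch of what the trace performs; the node-meminfo fallback mirrors the [[ -e /sys/devices/system/node/node/meminfo ]] check above, the implementation details do not):

#!/usr/bin/env bash
# Simplified stand-in for setup/common.sh:get_meminfo: print one field from
# /proc/meminfo, or from a node's meminfo when a node number is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n ${node} ]] && mem_f=/sys/devices/system/node/node${node}/meminfo
    local var val _
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it so field
    # names match /proc/meminfo, then scan until the requested key is found.
    while IFS=': ' read -r var val _; do
        if [[ ${var} == "${get}" ]]; then
            echo "${val}"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "${mem_f}")
    return 1
}

get_meminfo AnonHugePages      # prints 0 on this runner
get_meminfo HugePages_Total 0  # per-node lookup, node 0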
# read -r var val _ 00:03:52.921 11:31:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.921 11:31:23 -- setup/common.sh@33 -- # echo 0 00:03:52.921 11:31:23 -- setup/common.sh@33 -- # return 0 00:03:52.921 11:31:23 -- setup/hugepages.sh@97 -- # anon=0 00:03:52.921 11:31:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.921 11:31:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.921 11:31:23 -- setup/common.sh@18 -- # local node= 00:03:52.921 11:31:23 -- setup/common.sh@19 -- # local var val 00:03:52.921 11:31:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.921 11:31:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.921 11:31:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.921 11:31:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.921 11:31:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.921 11:31:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.921 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.921 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.922 11:31:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44156532 kB' 'MemAvailable: 47863144 kB' 'Buffers: 4100 kB' 'Cached: 9965432 kB' 'SwapCached: 0 kB' 'Active: 6746540 kB' 'Inactive: 3693068 kB' 'Active(anon): 6352124 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473368 kB' 'Mapped: 174820 kB' 'Shmem: 5882048 kB' 'KReclaimable: 232436 kB' 'Slab: 1064396 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 831960 kB' 'KernelStack: 21888 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7550148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:52.922 11:31:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.922 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.922 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.922 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.922 11:31:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.922 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.922 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.922 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.922 11:31:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.922 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.922 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.922 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.922 11:31:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.922 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.922 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.922 11:31:23 -- 
00:03:52.922 [xtrace condensed: the same setup/common.sh@31-32 loop walks the /proc/meminfo fields again until it reaches the requested HugePages_Surp key]
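The anon counter was already set to 0 a few records back (hugepages.sh@96-97): the "always [madvise] never" string tested there is presumably the content of /sys/kernel/mm/transparent_hugepage/enabled, and AnonHugePages only counts as hugepage usage when "[never]" is not selected. A sketch of that guard (illustrative names, not the project's exact code):

#!/usr/bin/env bash
# Only count THP-backed anonymous memory when transparent hugepages are not
# forced off, mirroring the [[ ... != *\[\n\e\v\e\r\]* ]] test traced above.
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ ${thp_state} != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)   # kB
else
    anon=0
fi
echo "anon_hugepages=${anon}"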
11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.923 11:31:23 -- setup/common.sh@33 -- # echo 0 00:03:52.923 11:31:23 -- setup/common.sh@33 -- # return 0 00:03:52.923 11:31:23 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.923 11:31:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.923 11:31:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.923 11:31:23 -- setup/common.sh@18 -- # local node= 00:03:52.923 11:31:23 -- setup/common.sh@19 -- # local var val 00:03:52.923 11:31:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.923 11:31:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.923 11:31:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.923 11:31:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.923 11:31:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.923 11:31:23 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44153716 kB' 'MemAvailable: 47860328 kB' 'Buffers: 4100 kB' 'Cached: 9965448 kB' 'SwapCached: 0 kB' 'Active: 6749756 kB' 'Inactive: 3693068 kB' 'Active(anon): 6355340 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472004 kB' 'Mapped: 174664 kB' 'Shmem: 5882064 kB' 'KReclaimable: 232436 kB' 'Slab: 1064380 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 831944 kB' 'KernelStack: 21888 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7553332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.923 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.923 11:31:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:52.923 [xtrace condensed: the setup/common.sh@31-32 loop walks the /proc/meminfo fields once more, this time looking for HugePages_Rsvd]
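Later in verify_nr_hugepages the per-node accounting seen at the end of default_setup above (hugepages.sh@117 and @126-128) repeats: each NUMA node's surplus pages are added to the count expected for that node before the node totals are compared. A sketch of that step (node_meminfo is a hypothetical helper, not the project's code; 512 pages per node matches the get_test_nr_hugepages 1048576 0 1 computation above):

#!/usr/bin/env bash
# Per-node hugepage accounting: expected pages per node plus that node's surplus.
declare -A nodes_test=([0]=512 [1]=512)      # pages requested per NUMA node

node_meminfo() {  # <field> <node>: value from /sys/devices/system/node/nodeN/meminfo
    sed 's/^Node [0-9]* //' "/sys/devices/system/node/node$2/meminfo" |
        awk -v key="$1:" '$1 == key { print $2 }'
}

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += $(node_meminfo HugePages_Surp "${node}") ))
    echo "node${node}=${nodes_test[node]} expecting 512"
done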
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.924 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.924 11:31:23 -- setup/common.sh@33 -- # echo 0 00:03:52.924 11:31:23 -- setup/common.sh@33 -- # return 0 00:03:52.924 11:31:23 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.924 11:31:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.924 nr_hugepages=1024 00:03:52.924 11:31:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.924 resv_hugepages=0 00:03:52.924 11:31:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.924 surplus_hugepages=0 00:03:52.924 11:31:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.924 anon_hugepages=0 00:03:52.924 11:31:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.924 11:31:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.924 11:31:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.924 11:31:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.924 11:31:23 -- setup/common.sh@18 -- # local node= 00:03:52.924 11:31:23 -- setup/common.sh@19 -- # local var val 00:03:52.924 11:31:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.924 11:31:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.924 11:31:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.924 11:31:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.924 11:31:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.924 11:31:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.924 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44158100 kB' 'MemAvailable: 47864712 kB' 'Buffers: 4100 kB' 'Cached: 9965468 kB' 'SwapCached: 0 kB' 'Active: 6744780 kB' 'Inactive: 3693068 kB' 'Active(anon): 6350364 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 471628 kB' 'Mapped: 174332 kB' 'Shmem: 5882084 kB' 'KReclaimable: 232436 kB' 'Slab: 1064380 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 831944 kB' 'KernelStack: 21904 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7548300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217884 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 
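The trace running through here is setup/common.sh resolving one field out of a meminfo dump: it reads /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo when a node is given), splits each line on ': ', and echoes the value once the requested key matches. A minimal standalone sketch of that lookup, assuming the helper name get_mem_field and the sed-based "Node N " prefix strip are illustrative rather than the script's own code:

  get_mem_field() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # per-node stats live in sysfs; fall back to /proc/meminfo otherwise
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # per-node files prefix every line with "Node <N> "; drop it before parsing
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
      return 1
  }

On the dumps above, get_mem_field HugePages_Total would print 1024 and get_mem_field HugePages_Rsvd would print 0, which is exactly what the traced lookups return before the test moves on to the per-node counters.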
00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.925 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.925 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.926 11:31:23 -- setup/common.sh@33 -- # echo 1024 00:03:52.926 11:31:23 -- setup/common.sh@33 -- # return 0 00:03:52.926 11:31:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.926 11:31:23 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.926 11:31:23 -- setup/hugepages.sh@27 -- # local node 00:03:52.926 11:31:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.926 11:31:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.926 11:31:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.926 11:31:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.926 11:31:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.926 11:31:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.926 11:31:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.926 11:31:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.926 11:31:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.926 11:31:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.926 11:31:23 -- setup/common.sh@18 -- # local node=0 00:03:52.926 11:31:23 -- setup/common.sh@19 -- # local var val 00:03:52.926 11:31:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.926 11:31:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.926 11:31:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.926 11:31:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.926 11:31:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.926 11:31:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 27506524 kB' 'MemUsed: 5127912 kB' 'SwapCached: 0 kB' 'Active: 2364072 kB' 'Inactive: 163156 kB' 'Active(anon): 2200452 kB' 'Inactive(anon): 0 kB' 'Active(file): 163620 kB' 'Inactive(file): 163156 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2182804 kB' 'Mapped: 59036 kB' 'AnonPages: 347692 kB' 'Shmem: 1856028 kB' 'KernelStack: 12072 kB' 'PageTables: 4972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103744 kB' 'Slab: 529296 kB' 'SReclaimable: 103744 kB' 'SUnreclaim: 425552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:52.926 11:31:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.926 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.926 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.186 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.186 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@33 -- # echo 0 00:03:53.187 11:31:23 -- setup/common.sh@33 -- # return 0 00:03:53.187 11:31:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.187 11:31:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.187 11:31:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.187 11:31:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:53.187 11:31:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.187 11:31:23 -- setup/common.sh@18 -- # local node=1 00:03:53.187 11:31:23 -- setup/common.sh@19 -- # local var val 00:03:53.187 11:31:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.187 11:31:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.187 11:31:23 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node1/meminfo ]] 00:03:53.187 11:31:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:53.187 11:31:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.187 11:31:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649352 kB' 'MemFree: 16651280 kB' 'MemUsed: 10998072 kB' 'SwapCached: 0 kB' 'Active: 4381572 kB' 'Inactive: 3529912 kB' 'Active(anon): 4150776 kB' 'Inactive(anon): 0 kB' 'Active(file): 230796 kB' 'Inactive(file): 3529912 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7786780 kB' 'Mapped: 115296 kB' 'AnonPages: 124828 kB' 'Shmem: 4026072 kB' 'KernelStack: 9896 kB' 'PageTables: 2884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128692 kB' 'Slab: 535084 kB' 'SReclaimable: 128692 kB' 'SUnreclaim: 406392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.187 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.187 11:31:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
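What the setup/hugepages.sh fragments woven through this trace are doing: once the global count checks out, the test walks each NUMA node, adds the reserved and surplus counts to the pages it already attributed to that node, and verifies that the 1024 pages landed 512/512. A rough reconstruction of that accounting, assuming nodes_test was seeded earlier with each node's HugePages_Free (not visible in this part of the log) and reusing the get_mem_field sketch above:

  resv=0
  declare -a nodes_test=(512 512)     # seeded from per-node HugePages_Free earlier in the run
  for node in "${!nodes_test[@]}"; do
      surp=$(get_mem_field HugePages_Surp "$node")
      (( nodes_test[node] += resv + surp ))
      echo "node${node}=${nodes_test[node]} expecting 512"
  done

With resv and surp both 0, as read out above, this prints the same "node0=512 expecting 512" / "node1=512 expecting 512" lines the log records just before per_node_1G_alloc ends.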
00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # continue 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.188 11:31:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.188 11:31:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.188 11:31:23 -- setup/common.sh@33 -- # echo 0 00:03:53.188 11:31:23 -- setup/common.sh@33 -- # return 0 00:03:53.188 11:31:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.188 11:31:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.188 11:31:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.188 11:31:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.188 11:31:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:53.188 node0=512 expecting 512 00:03:53.188 11:31:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.188 11:31:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.188 11:31:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.188 11:31:23 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:53.188 node1=512 expecting 512 00:03:53.188 11:31:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:53.188 00:03:53.188 real 0m3.639s 00:03:53.188 user 0m1.413s 00:03:53.188 sys 0m2.293s 00:03:53.188 11:31:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:53.188 11:31:23 -- common/autotest_common.sh@10 -- # set +x 00:03:53.188 ************************************ 00:03:53.188 END TEST per_node_1G_alloc 00:03:53.188 ************************************ 00:03:53.188 11:31:23 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:53.188 11:31:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.188 11:31:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.188 11:31:23 -- common/autotest_common.sh@10 -- # set +x 00:03:53.188 ************************************ 00:03:53.188 START TEST even_2G_alloc 00:03:53.188 ************************************ 00:03:53.188 11:31:23 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:53.188 11:31:23 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:53.188 11:31:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.188 11:31:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:53.188 11:31:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.188 11:31:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.188 11:31:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:53.188 11:31:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:53.188 11:31:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.188 11:31:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.188 11:31:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.188 11:31:23 -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:03:53.188 11:31:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.188 11:31:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.188 11:31:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:53.188 11:31:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.188 11:31:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:53.188 11:31:23 -- setup/hugepages.sh@83 -- # : 512 00:03:53.188 11:31:23 -- setup/hugepages.sh@84 -- # : 1 00:03:53.188 11:31:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.188 11:31:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:53.188 11:31:23 -- setup/hugepages.sh@83 -- # : 0 00:03:53.188 11:31:23 -- setup/hugepages.sh@84 -- # : 0 00:03:53.188 11:31:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.188 11:31:23 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:53.188 11:31:23 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:53.188 11:31:23 -- setup/hugepages.sh@153 -- # setup output 00:03:53.188 11:31:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.188 11:31:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:56.476 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.476 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.477 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:56.477 11:31:26 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:56.477 11:31:26 -- setup/hugepages.sh@89 -- # local node 00:03:56.477 11:31:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.477 11:31:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.477 11:31:26 -- setup/hugepages.sh@92 -- # local surp 00:03:56.477 11:31:26 -- setup/hugepages.sh@93 -- # local resv 00:03:56.477 11:31:26 -- setup/hugepages.sh@94 -- # local anon 00:03:56.477 11:31:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.477 11:31:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.477 11:31:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.477 11:31:26 -- setup/common.sh@18 -- # local node= 00:03:56.477 11:31:26 -- setup/common.sh@19 -- # local var val 00:03:56.477 11:31:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.477 11:31:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.477 11:31:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.477 11:31:26 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.477 11:31:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.477 11:31:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44138088 kB' 'MemAvailable: 47844700 kB' 'Buffers: 4100 kB' 'Cached: 9965560 kB' 'SwapCached: 0 kB' 'Active: 6746228 kB' 'Inactive: 3693068 kB' 'Active(anon): 6351812 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472936 kB' 'Mapped: 174388 kB' 'Shmem: 5882176 kB' 'KReclaimable: 232436 kB' 'Slab: 1064828 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 832392 kB' 'KernelStack: 22016 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7549132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218092 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 
-- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.477 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.477 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 
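The even_2G_alloc test that starts above sizes its request the same way the earlier tests did: 2097152 kB of hugetlb memory at the 2048 kB default page size works out to nr_hugepages=1024, and HUGE_EVEN_ALLOC=yes asks setup.sh to spread them evenly over the two nodes before verify_nr_hugepages re-reads AnonHugePages and the per-node counters. A back-of-the-envelope sketch of that arithmetic (the variable names here are illustrative, not the script's own):

  size_kb=2097152                                  # total hugetlb memory requested, in kB
  hugepagesize_kb=2048                             # Hugepagesize: 2048 kB in the dumps above
  nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1024 pages -> NRHUGE=1024
  no_nodes=2
  per_node=$(( nr_hugepages / no_nodes ))          # 512 pages per node with HUGE_EVEN_ALLOC=yes
  echo "NRHUGE=$nr_hugepages, $per_node per node"

The AnonHugePages lookup being traced around this point feeds the anon=0 figure that verify_nr_hugepages subtracts before comparing against those 512-per-node expectations.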
11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.478 11:31:26 -- setup/common.sh@33 -- # echo 0 00:03:56.478 11:31:26 -- setup/common.sh@33 -- # return 0 00:03:56.478 11:31:26 -- setup/hugepages.sh@97 -- # anon=0 00:03:56.478 11:31:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.478 11:31:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.478 11:31:26 -- setup/common.sh@18 -- # local node= 00:03:56.478 11:31:26 -- setup/common.sh@19 -- # local var val 00:03:56.478 11:31:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.478 11:31:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.478 11:31:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.478 11:31:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.478 11:31:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.478 11:31:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44139516 kB' 'MemAvailable: 47846128 kB' 'Buffers: 4100 kB' 'Cached: 9965560 kB' 'SwapCached: 0 kB' 'Active: 6746516 kB' 'Inactive: 3693068 kB' 'Active(anon): 6352100 kB' 'Inactive(anon): 0 kB' 'Active(file): 
394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473268 kB' 'Mapped: 174388 kB' 'Shmem: 5882176 kB' 'KReclaimable: 232436 kB' 'Slab: 1064844 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 832408 kB' 'KernelStack: 21984 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7549144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 
11:31:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.478 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.478 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.479 11:31:26 -- setup/common.sh@33 -- # echo 0 00:03:56.479 11:31:26 -- setup/common.sh@33 -- # return 0 00:03:56.479 11:31:26 -- setup/hugepages.sh@99 -- # surp=0 00:03:56.479 11:31:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.479 11:31:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.479 11:31:26 -- setup/common.sh@18 -- # local node= 00:03:56.479 11:31:26 -- setup/common.sh@19 -- # local var val 00:03:56.479 11:31:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.479 11:31:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.479 11:31:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.479 11:31:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.479 11:31:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.479 11:31:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44140412 kB' 'MemAvailable: 47847024 kB' 'Buffers: 4100 kB' 'Cached: 9965572 kB' 'SwapCached: 0 kB' 'Active: 6745836 kB' 'Inactive: 3693068 kB' 'Active(anon): 6351420 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472560 kB' 'Mapped: 174348 kB' 'Shmem: 5882188 kB' 'KReclaimable: 232436 kB' 'Slab: 1064896 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 832460 kB' 'KernelStack: 21968 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7549160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 
'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.479 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.479 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- 
setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.480 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.480 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.481 11:31:26 -- setup/common.sh@33 -- # echo 0 00:03:56.481 11:31:26 -- setup/common.sh@33 -- # return 0 00:03:56.481 11:31:26 -- setup/hugepages.sh@100 -- # resv=0 00:03:56.481 11:31:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.481 nr_hugepages=1024 00:03:56.481 11:31:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.481 resv_hugepages=0 00:03:56.481 11:31:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.481 surplus_hugepages=0 00:03:56.481 11:31:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.481 anon_hugepages=0 00:03:56.481 11:31:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.481 11:31:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.481 11:31:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.481 11:31:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.481 11:31:26 -- setup/common.sh@18 -- # local node= 00:03:56.481 11:31:26 -- setup/common.sh@19 -- # local var val 00:03:56.481 11:31:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.481 11:31:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.481 11:31:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.481 11:31:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.481 11:31:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.481 11:31:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44140412 kB' 'MemAvailable: 47847024 kB' 'Buffers: 4100 kB' 'Cached: 9965572 kB' 'SwapCached: 0 kB' 'Active: 6745836 kB' 'Inactive: 3693068 kB' 'Active(anon): 6351420 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472560 kB' 'Mapped: 174348 kB' 'Shmem: 5882188 kB' 'KReclaimable: 232436 kB' 'Slab: 1064896 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 832460 kB' 'KernelStack: 21968 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7549172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 
11:31:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 
11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.481 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.481 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.482 11:31:26 -- setup/common.sh@33 -- # echo 1024 00:03:56.482 11:31:26 -- setup/common.sh@33 -- # return 0 00:03:56.482 11:31:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.482 11:31:26 -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.482 11:31:26 -- setup/hugepages.sh@27 -- # local node 00:03:56.482 11:31:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.482 11:31:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.482 11:31:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.482 11:31:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.482 11:31:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.482 11:31:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.482 11:31:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.482 11:31:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.482 11:31:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 
00:03:56.482 11:31:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.482 11:31:26 -- setup/common.sh@18 -- # local node=0 00:03:56.482 11:31:26 -- setup/common.sh@19 -- # local var val 00:03:56.482 11:31:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.482 11:31:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.482 11:31:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.482 11:31:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.482 11:31:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.482 11:31:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.482 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.482 11:31:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 27478736 kB' 'MemUsed: 5155700 kB' 'SwapCached: 0 kB' 'Active: 2364028 kB' 'Inactive: 163156 kB' 'Active(anon): 2200408 kB' 'Inactive(anon): 0 kB' 'Active(file): 163620 kB' 'Inactive(file): 163156 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2182888 kB' 'Mapped: 59052 kB' 'AnonPages: 347476 kB' 'Shmem: 1856112 kB' 'KernelStack: 12072 kB' 'PageTables: 4912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103744 kB' 'Slab: 529636 kB' 'SReclaimable: 103744 kB' 'SUnreclaim: 425892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.482 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- 
setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.483 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.483 11:31:26 -- setup/common.sh@33 -- # echo 0 00:03:56.483 11:31:26 -- setup/common.sh@33 -- # return 0 00:03:56.483 11:31:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.483 11:31:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.483 11:31:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.483 11:31:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.483 11:31:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.483 11:31:26 -- setup/common.sh@18 -- # local node=1 00:03:56.483 11:31:26 -- setup/common.sh@19 -- # local var val 00:03:56.483 11:31:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.483 11:31:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.483 11:31:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.483 11:31:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.483 11:31:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.483 11:31:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.483 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649352 kB' 'MemFree: 16662284 kB' 'MemUsed: 10987068 kB' 'SwapCached: 0 kB' 'Active: 4381860 kB' 'Inactive: 3529912 kB' 'Active(anon): 4151064 kB' 'Inactive(anon): 0 kB' 'Active(file): 230796 kB' 'Inactive(file): 3529912 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7786816 kB' 'Mapped: 115296 kB' 'AnonPages: 125092 kB' 'Shmem: 4026108 kB' 'KernelStack: 9896 kB' 'PageTables: 2856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128692 kB' 'Slab: 535260 kB' 'SReclaimable: 128692 kB' 'SUnreclaim: 406568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 
11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 
-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # continue 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.484 11:31:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.484 11:31:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.484 11:31:26 -- setup/common.sh@33 -- # echo 0 00:03:56.485 11:31:26 -- setup/common.sh@33 -- # return 0 00:03:56.485 11:31:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.485 11:31:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.485 11:31:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.485 11:31:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.485 11:31:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:56.485 node0=512 expecting 512 00:03:56.485 11:31:26 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:03:56.485 11:31:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.485 11:31:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.485 11:31:26 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:56.485 node1=512 expecting 512 00:03:56.485 11:31:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:56.485 00:03:56.485 real 0m3.066s 00:03:56.485 user 0m1.070s 00:03:56.485 sys 0m1.990s 00:03:56.485 11:31:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:56.485 11:31:26 -- common/autotest_common.sh@10 -- # set +x 00:03:56.485 ************************************ 00:03:56.485 END TEST even_2G_alloc 00:03:56.485 ************************************ 00:03:56.485 11:31:26 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:56.485 11:31:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.485 11:31:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.485 11:31:26 -- common/autotest_common.sh@10 -- # set +x 00:03:56.485 ************************************ 00:03:56.485 START TEST odd_alloc 00:03:56.485 ************************************ 00:03:56.485 11:31:26 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:56.485 11:31:26 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:56.485 11:31:26 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:56.485 11:31:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.485 11:31:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.485 11:31:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:56.485 11:31:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.485 11:31:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.485 11:31:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.485 11:31:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:56.485 11:31:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.485 11:31:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.485 11:31:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.485 11:31:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.485 11:31:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.485 11:31:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.485 11:31:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.485 11:31:26 -- setup/hugepages.sh@83 -- # : 513 00:03:56.485 11:31:26 -- setup/hugepages.sh@84 -- # : 1 00:03:56.485 11:31:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.485 11:31:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:56.485 11:31:26 -- setup/hugepages.sh@83 -- # : 0 00:03:56.485 11:31:26 -- setup/hugepages.sh@84 -- # : 0 00:03:56.485 11:31:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.485 11:31:26 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:56.485 11:31:26 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:56.485 11:31:26 -- setup/hugepages.sh@160 -- # setup output 00:03:56.485 11:31:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.485 11:31:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:59.771 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 
00:03:59.771 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:59.771 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.771 11:31:30 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:59.771 11:31:30 -- setup/hugepages.sh@89 -- # local node 00:03:59.772 11:31:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.772 11:31:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.772 11:31:30 -- setup/hugepages.sh@92 -- # local surp 00:03:59.772 11:31:30 -- setup/hugepages.sh@93 -- # local resv 00:03:59.772 11:31:30 -- setup/hugepages.sh@94 -- # local anon 00:03:59.772 11:31:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.772 11:31:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.772 11:31:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.772 11:31:30 -- setup/common.sh@18 -- # local node= 00:03:59.772 11:31:30 -- setup/common.sh@19 -- # local var val 00:03:59.772 11:31:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.772 11:31:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.772 11:31:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.772 11:31:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.772 11:31:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.772 11:31:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44142832 kB' 'MemAvailable: 47849444 kB' 'Buffers: 4100 kB' 'Cached: 9965688 kB' 'SwapCached: 0 kB' 'Active: 6749620 kB' 'Inactive: 3693068 kB' 'Active(anon): 6355204 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476768 kB' 'Mapped: 174316 kB' 'Shmem: 5882304 kB' 'KReclaimable: 232436 kB' 'Slab: 1065296 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 832860 kB' 'KernelStack: 22064 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480896 kB' 'Committed_AS: 7549784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.772 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.772 11:31:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.773 11:31:30 -- setup/common.sh@33 -- # echo 0 00:03:59.773 11:31:30 -- setup/common.sh@33 -- # return 0 00:03:59.773 11:31:30 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.773 11:31:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.773 11:31:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.773 11:31:30 -- setup/common.sh@18 -- # local node= 00:03:59.773 11:31:30 -- setup/common.sh@19 -- # local var val 00:03:59.773 11:31:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.773 11:31:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.773 11:31:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.773 11:31:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.773 11:31:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.773 11:31:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44143500 kB' 'MemAvailable: 47850112 kB' 'Buffers: 4100 kB' 'Cached: 9965692 kB' 'SwapCached: 0 kB' 'Active: 6750284 kB' 'Inactive: 3693068 kB' 'Active(anon): 6355868 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477428 kB' 'Mapped: 174300 kB' 'Shmem: 5882308 kB' 'KReclaimable: 232436 kB' 'Slab: 1065344 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 832908 kB' 'KernelStack: 22064 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480896 kB' 'Committed_AS: 7549796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218012 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
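A little earlier in this verify_nr_hugepages pass the anon check completed with anon=0: the transparent-hugepage setting read by the script ('always [madvise] never', so not '[never]') does not rule THP out, so AnonHugePages was fetched from /proc/meminfo and came back 0 kB. A hedged sketch of that check; the sysfs path is the standard kernel location and is assumed rather than shown verbatim in the trace:

#!/usr/bin/env bash
anon=0
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP is not globally off, so the AnonHugePages counter (kB) is worth accounting for.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=$anon"   # 0 in this run, matching "anon=0" in the trace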
00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.773 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.773 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.774 11:31:30 -- setup/common.sh@33 -- # echo 0 00:03:59.774 11:31:30 -- setup/common.sh@33 -- # return 0 00:03:59.774 11:31:30 -- setup/hugepages.sh@99 -- # surp=0 00:03:59.774 11:31:30 -- setup/hugepages.sh@100 -- # 
get_meminfo HugePages_Rsvd 00:03:59.774 11:31:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.774 11:31:30 -- setup/common.sh@18 -- # local node= 00:03:59.774 11:31:30 -- setup/common.sh@19 -- # local var val 00:03:59.774 11:31:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.774 11:31:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.774 11:31:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.774 11:31:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.774 11:31:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.774 11:31:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44144456 kB' 'MemAvailable: 47851068 kB' 'Buffers: 4100 kB' 'Cached: 9965704 kB' 'SwapCached: 0 kB' 'Active: 6750444 kB' 'Inactive: 3693068 kB' 'Active(anon): 6356028 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477604 kB' 'Mapped: 174300 kB' 'Shmem: 5882320 kB' 'KReclaimable: 232436 kB' 'Slab: 1065344 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 832908 kB' 'KernelStack: 22064 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480896 kB' 'Committed_AS: 7549812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.774 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.774 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- 
setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 
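The long run of "[[ <field> == HugePages_Rsvd ]] ... continue" entries above is the setup helper walking every /proc/meminfo field until it reaches HugePages_Rsvd. For orientation only, the same lookup can be expressed as a one-line equivalent (an illustrative sketch, not the SPDK helper itself):

awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo   # prints 0 in this run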
00:03:59.775 11:31:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.775 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.775 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.776 11:31:30 -- setup/common.sh@33 -- # echo 0 00:03:59.776 11:31:30 -- setup/common.sh@33 -- # return 0 00:03:59.776 11:31:30 -- setup/hugepages.sh@100 -- # resv=0 00:03:59.776 11:31:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:59.776 nr_hugepages=1025 00:03:59.776 11:31:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.776 resv_hugepages=0 00:03:59.776 11:31:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.776 surplus_hugepages=0 00:03:59.776 11:31:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.776 anon_hugepages=0 00:03:59.776 11:31:30 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:59.776 11:31:30 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:59.776 11:31:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.776 11:31:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.776 11:31:30 -- setup/common.sh@18 -- # local node= 00:03:59.776 11:31:30 -- setup/common.sh@19 -- # local var val 00:03:59.776 11:31:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.776 11:31:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.776 11:31:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.776 11:31:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.776 11:31:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.776 11:31:30 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44144168 kB' 'MemAvailable: 47850780 kB' 'Buffers: 4100 kB' 'Cached: 9965728 kB' 'SwapCached: 0 kB' 'Active: 6750368 kB' 'Inactive: 3693068 kB' 'Active(anon): 6355952 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477440 kB' 'Mapped: 174300 kB' 'Shmem: 5882344 kB' 'KReclaimable: 232436 kB' 'Slab: 1065344 kB' 'SReclaimable: 232436 kB' 'SUnreclaim: 832908 kB' 'KernelStack: 22048 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480896 kB' 'Committed_AS: 7549824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.776 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.776 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # 
continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.777 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.777 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.777 11:31:30 -- setup/common.sh@33 -- # echo 1025 00:03:59.777 11:31:30 -- setup/common.sh@33 -- # return 0 00:03:59.777 11:31:30 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:59.777 11:31:30 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.777 11:31:30 -- setup/hugepages.sh@27 -- # local node 00:03:59.777 11:31:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.777 11:31:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.777 11:31:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.777 11:31:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:59.777 11:31:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.777 11:31:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.777 11:31:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.777 11:31:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.777 11:31:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.777 11:31:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.777 11:31:30 -- setup/common.sh@18 -- # local node=0 00:03:59.777 11:31:30 -- setup/common.sh@19 -- # local var val 00:03:59.777 11:31:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.777 11:31:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.777 11:31:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.777 11:31:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.777 11:31:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.777 11:31:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.777 11:31:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 27492704 kB' 'MemUsed: 5141732 kB' 'SwapCached: 0 kB' 'Active: 2366304 kB' 'Inactive: 163156 kB' 'Active(anon): 2202684 kB' 'Inactive(anon): 0 kB' 'Active(file): 163620 kB' 'Inactive(file): 163156 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2182928 kB' 'Mapped: 59560 kB' 'AnonPages: 350248 kB' 'Shmem: 1856152 kB' 'KernelStack: 12136 kB' 'PageTables: 5144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103744 kB' 'Slab: 529976 kB' 'SReclaimable: 103744 kB' 'SUnreclaim: 426232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 
11:31:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.778 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.778 11:31:30 -- setup/common.sh@33 -- # echo 0 00:03:59.778 11:31:30 -- setup/common.sh@33 -- # return 0 00:03:59.778 11:31:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.778 11:31:30 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:03:59.778 11:31:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.778 11:31:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.778 11:31:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.778 11:31:30 -- setup/common.sh@18 -- # local node=1 00:03:59.778 11:31:30 -- setup/common.sh@19 -- # local var val 00:03:59.778 11:31:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.778 11:31:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.778 11:31:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.778 11:31:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.778 11:31:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.778 11:31:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.778 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649352 kB' 'MemFree: 16645296 kB' 'MemUsed: 11004056 kB' 'SwapCached: 0 kB' 'Active: 4389120 kB' 'Inactive: 3529912 kB' 'Active(anon): 4158324 kB' 'Inactive(anon): 0 kB' 'Active(file): 230796 kB' 'Inactive(file): 3529912 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7786916 kB' 'Mapped: 115448 kB' 'AnonPages: 132304 kB' 'Shmem: 4026208 kB' 'KernelStack: 9896 kB' 'PageTables: 2888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128692 kB' 'Slab: 535368 kB' 'SReclaimable: 128692 kB' 'SUnreclaim: 406676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
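The two per-node HugePages_Surp scans (node 0 above, node 1 here) feed the odd_alloc verdict that follows: with 1025 pages requested across two NUMA nodes, sysfs reports 512 pages on one node and 513 on the other, and the test only checks that the multiset of per-node counts matches the expectation, not which node holds the odd page. A sketch of that comparison, with the counts hard-coded for illustration to the values seen in this run; the variable names follow the hugepages.sh trace but the snippet is illustrative:

# Expected per-node counts (nodes_test) vs. counts read back from sysfs
# (nodes_sys). Indexed arrays keyed by the count expand their keys in
# ascending order, which turns the string compare below into a
# sorted-set comparison.
nodes_test=(512 513)
nodes_sys=(513 512)          # which node holds the odd page may differ
declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
done
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd_alloc: per-node split OK (512 513)"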
00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # continue 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.779 11:31:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.779 11:31:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.779 11:31:30 -- setup/common.sh@33 -- # echo 0 00:03:59.779 11:31:30 -- setup/common.sh@33 -- # return 0 00:03:59.779 11:31:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.779 11:31:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.779 11:31:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.779 11:31:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.779 11:31:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:59.779 node0=512 expecting 513 00:03:59.780 11:31:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.780 11:31:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.780 11:31:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.780 11:31:30 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:59.780 node1=513 expecting 512 00:03:59.780 11:31:30 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:59.780 00:03:59.780 real 0m3.505s 00:03:59.780 user 0m1.355s 00:03:59.780 sys 0m2.217s 00:03:59.780 11:31:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.780 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:03:59.780 ************************************ 00:03:59.780 END TEST odd_alloc 00:03:59.780 ************************************ 00:03:59.780 11:31:30 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:59.780 11:31:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.780 11:31:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.780 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:03:59.780 ************************************ 00:03:59.780 START TEST custom_alloc 00:03:59.780 ************************************ 00:03:59.780 11:31:30 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:59.780 11:31:30 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:59.780 11:31:30 -- setup/hugepages.sh@169 -- # local node 00:03:59.780 11:31:30 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:59.780 11:31:30 -- 
setup/hugepages.sh@170 -- # local nodes_hp 00:03:59.780 11:31:30 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:59.780 11:31:30 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:59.780 11:31:30 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:59.780 11:31:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:59.780 11:31:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.780 11:31:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.780 11:31:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.780 11:31:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:59.780 11:31:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.780 11:31:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.780 11:31:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.780 11:31:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:59.780 11:31:30 -- setup/hugepages.sh@83 -- # : 256 00:03:59.780 11:31:30 -- setup/hugepages.sh@84 -- # : 1 00:03:59.780 11:31:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:59.780 11:31:30 -- setup/hugepages.sh@83 -- # : 0 00:03:59.780 11:31:30 -- setup/hugepages.sh@84 -- # : 0 00:03:59.780 11:31:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:59.780 11:31:30 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:59.780 11:31:30 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.780 11:31:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.780 11:31:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.780 11:31:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.780 11:31:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.780 11:31:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.780 11:31:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.780 11:31:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.780 11:31:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.780 11:31:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.780 11:31:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:59.780 11:31:30 -- setup/hugepages.sh@78 -- # return 0 00:03:59.780 11:31:30 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:59.780 11:31:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:59.780 11:31:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:59.780 11:31:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:59.780 11:31:30 
-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:59.780 11:31:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:59.780 11:31:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.780 11:31:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.780 11:31:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.780 11:31:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.780 11:31:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.780 11:31:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.780 11:31:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:59.780 11:31:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.780 11:31:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:59.780 11:31:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.780 11:31:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:59.780 11:31:30 -- setup/hugepages.sh@78 -- # return 0 00:03:59.780 11:31:30 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:59.780 11:31:30 -- setup/hugepages.sh@187 -- # setup output 00:03:59.780 11:31:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.780 11:31:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:03.065 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.065 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.065 11:31:33 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:03.065 11:31:33 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:03.065 11:31:33 -- setup/hugepages.sh@89 -- # local node 00:04:03.065 11:31:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.065 11:31:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.065 11:31:33 -- setup/hugepages.sh@92 -- # local surp 00:04:03.065 11:31:33 -- setup/hugepages.sh@93 -- # local resv 00:04:03.065 11:31:33 -- setup/hugepages.sh@94 -- # local anon 00:04:03.065 11:31:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.065 11:31:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.065 11:31:33 -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:04:03.065 11:31:33 -- setup/common.sh@18 -- # local node= 00:04:03.065 11:31:33 -- setup/common.sh@19 -- # local var val 00:04:03.065 11:31:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.065 11:31:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.065 11:31:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.065 11:31:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.065 11:31:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.065 11:31:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 43102736 kB' 'MemAvailable: 46809332 kB' 'Buffers: 4100 kB' 'Cached: 9965820 kB' 'SwapCached: 0 kB' 'Active: 6747424 kB' 'Inactive: 3693068 kB' 'Active(anon): 6353008 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473916 kB' 'Mapped: 173328 kB' 'Shmem: 5882436 kB' 'KReclaimable: 232404 kB' 'Slab: 1065052 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 832648 kB' 'KernelStack: 21936 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957632 kB' 'Committed_AS: 7516056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217868 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- 
setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ 
Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.065 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.065 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.066 11:31:33 -- setup/common.sh@33 -- # echo 0 00:04:03.066 11:31:33 -- setup/common.sh@33 -- # return 0 00:04:03.066 11:31:33 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.066 11:31:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.066 11:31:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.066 11:31:33 -- setup/common.sh@18 -- # local node= 00:04:03.066 11:31:33 -- setup/common.sh@19 -- # local var val 00:04:03.066 11:31:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.066 11:31:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.066 11:31:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.066 11:31:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.066 11:31:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.066 11:31:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.066 11:31:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 43111308 kB' 'MemAvailable: 46817904 kB' 'Buffers: 4100 kB' 'Cached: 9965824 kB' 'SwapCached: 0 kB' 'Active: 6747496 kB' 'Inactive: 3693068 kB' 'Active(anon): 6353080 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474040 kB' 'Mapped: 173328 kB' 'Shmem: 5882440 kB' 'KReclaimable: 232404 kB' 'Slab: 1065020 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 832616 kB' 'KernelStack: 21904 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957632 kB' 'Committed_AS: 7516068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217836 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 
11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.066 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.066 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 
11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.067 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.067 11:31:33 -- setup/common.sh@33 -- # echo 0 00:04:03.067 11:31:33 -- setup/common.sh@33 -- # return 0 00:04:03.067 11:31:33 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.067 11:31:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.067 11:31:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.067 11:31:33 -- setup/common.sh@18 -- # local node= 00:04:03.067 11:31:33 -- setup/common.sh@19 -- # local var val 00:04:03.067 11:31:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.067 11:31:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.067 11:31:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.067 11:31:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.067 11:31:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.067 11:31:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.067 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 43112200 kB' 'MemAvailable: 46818796 kB' 'Buffers: 4100 kB' 'Cached: 9965824 kB' 'SwapCached: 0 kB' 'Active: 6746872 kB' 'Inactive: 3693068 kB' 'Active(anon): 6352456 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473404 kB' 'Mapped: 173288 kB' 'Shmem: 5882440 kB' 'KReclaimable: 232404 kB' 'Slab: 1065060 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 832656 kB' 'KernelStack: 21904 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957632 kB' 'Committed_AS: 7516084 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 217836 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.068 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.068 11:31:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 
11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.069 11:31:33 -- setup/common.sh@33 -- # echo 0 00:04:03.069 11:31:33 -- setup/common.sh@33 -- # return 0 00:04:03.069 11:31:33 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.069 11:31:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:03.069 nr_hugepages=1536 00:04:03.069 11:31:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.069 resv_hugepages=0 00:04:03.069 11:31:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.069 surplus_hugepages=0 00:04:03.069 11:31:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.069 anon_hugepages=0 00:04:03.069 11:31:33 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.069 11:31:33 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:03.069 11:31:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.069 11:31:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.069 11:31:33 -- setup/common.sh@18 -- # local node= 00:04:03.069 11:31:33 -- setup/common.sh@19 -- # local var val 00:04:03.069 11:31:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.069 11:31:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.069 11:31:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.069 11:31:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.069 11:31:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.069 11:31:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 43111940 kB' 'MemAvailable: 46818536 kB' 'Buffers: 4100 kB' 'Cached: 9965860 kB' 'SwapCached: 0 kB' 'Active: 6746548 kB' 'Inactive: 3693068 kB' 'Active(anon): 6352132 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473004 kB' 'Mapped: 173288 kB' 'Shmem: 5882476 kB' 'KReclaimable: 232404 kB' 'Slab: 1065060 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 832656 kB' 'KernelStack: 21888 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957632 kB' 'Committed_AS: 7516096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217836 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 
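At this point the trace has established resv=0 and surp=0 and is re-reading HugePages_Total to confirm that the 1536 pages requested through HUGENODE (512 on node 0, 1024 on node 1) are all present, before repeating the same lookup per NUMA node. As a minimal standalone sketch of that kind of check - with hypothetical helper and variable names, not the setup/common.sh or setup/hugepages.sh functions traced here - the same /proc/meminfo and /sys/devices/system/node/node<N>/meminfo interfaces can be read like this:

    #!/usr/bin/env bash
    # Sketch only: read a meminfo field globally (/proc/meminfo) or for one
    # NUMA node (/sys/devices/system/node/node<N>/meminfo). Names hypothetical.
    shopt -s nullglob

    get_meminfo_field() {
        local field=$1 node=${2:-}
        local src=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            src=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <N>"; drop that so the
        # key always sits in column 1, then print the value column.
        sed 's/^Node [0-9]* *//' "$src" | awk -v key="$field:" '$1 == key {print $2}'
    }

    # The check performed above, restated: node0 + node1 hugepages
    # (512 + 1024) must add up to the global pool (1536).
    total=$(get_meminfo_field HugePages_Total)
    sum=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        (( sum += $(get_meminfo_field HugePages_Total "$node") ))
    done
    if (( sum == total )); then
        echo "OK: per-node pages ($sum) match the global pool ($total)"
    else
        echo "mismatch: nodes report $sum, /proc/meminfo reports $total" >&2
    fi

The per-node meminfo files prefix every line with "Node <N>", which is why the sketch strips that prefix before matching the key; the traced get_meminfo does the equivalent with mapfile plus the parameter expansion "${mem[@]#Node +([0-9]) }" visible in the log above.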
00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.069 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.069 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': 
' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.070 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.070 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.070 11:31:33 -- setup/common.sh@33 -- # echo 1536 00:04:03.070 11:31:33 -- setup/common.sh@33 -- # return 0 00:04:03.070 11:31:33 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.070 11:31:33 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.070 11:31:33 -- setup/hugepages.sh@27 -- # local node 00:04:03.070 11:31:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.070 11:31:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.070 11:31:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.070 11:31:33 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.070 11:31:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.071 11:31:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.071 11:31:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.071 11:31:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.071 11:31:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.071 11:31:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.071 11:31:33 -- setup/common.sh@18 -- # local node=0 00:04:03.071 11:31:33 -- setup/common.sh@19 -- # local var val 00:04:03.071 11:31:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.071 11:31:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.071 11:31:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.071 11:31:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.071 11:31:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.071 11:31:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.071 11:31:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 27523652 kB' 'MemUsed: 5110784 kB' 'SwapCached: 0 kB' 'Active: 2365024 kB' 'Inactive: 163156 kB' 'Active(anon): 2201404 kB' 'Inactive(anon): 0 kB' 'Active(file): 163620 kB' 'Inactive(file): 163156 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2182984 kB' 'Mapped: 58228 kB' 'AnonPages: 348472 kB' 'Shmem: 1856208 kB' 'KernelStack: 12104 kB' 'PageTables: 4988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103712 kB' 'Slab: 529872 kB' 'SReclaimable: 103712 kB' 'SUnreclaim: 426160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- 
setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.071 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.071 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@33 -- # echo 0 00:04:03.072 11:31:33 -- setup/common.sh@33 -- # return 0 00:04:03.072 11:31:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.072 11:31:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.072 11:31:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.072 11:31:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.072 11:31:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.072 11:31:33 -- setup/common.sh@18 -- # local node=1 00:04:03.072 11:31:33 -- setup/common.sh@19 -- # local var val 00:04:03.072 11:31:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.072 11:31:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.072 11:31:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.072 11:31:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.072 11:31:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.072 11:31:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649352 kB' 'MemFree: 15589856 kB' 'MemUsed: 12059496 kB' 'SwapCached: 0 kB' 'Active: 4382160 kB' 'Inactive: 3529912 kB' 'Active(anon): 4151364 kB' 'Inactive(anon): 0 kB' 'Active(file): 230796 kB' 'Inactive(file): 3529912 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7786980 kB' 'Mapped: 115060 kB' 'AnonPages: 125244 kB' 'Shmem: 4026272 kB' 'KernelStack: 9832 kB' 'PageTables: 2748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128692 kB' 'Slab: 535188 kB' 
'SReclaimable: 128692 kB' 'SUnreclaim: 406496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 
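Aside on the repeated read/compare/continue lines running through this part of the trace: they come from setup/common.sh's get_meminfo helper walking /sys/devices/system/node/node1/meminfo, stripping the "Node N " prefix, and echoing the value of the one key it was asked for (HugePages_Surp, which is 0 here). The sketch below is a minimal reconstruction inferred from the trace, not the literal SPDK source; the sed-based prefix strip and the return codes are assumptions.

    # Look up one numeric field from /proc/meminfo, or from a node's meminfo
    # when a node id is given (mirrors what the trace above is parsing).
    get_meminfo() {
            local get=$1 node=$2 mem_f=/proc/meminfo var val _
            if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            # Per-node files prefix every line with "Node N "; drop it so the key
            # sits in the first column, then print the matching value only.
            while IFS=': ' read -r var val _; do
                    [[ $var == "$get" ]] || continue
                    echo "$val"
                    return 0
            done < <(sed 's/^Node [0-9]* //' "$mem_f")
            return 1
    }

    # e.g. get_meminfo HugePages_Surp 1   -> 0 in this run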
00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- 
setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.072 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.072 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # continue 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.073 11:31:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.073 11:31:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.073 11:31:33 -- setup/common.sh@33 -- # echo 0 00:04:03.073 11:31:33 -- setup/common.sh@33 -- # return 0 00:04:03.073 11:31:33 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:03.073 11:31:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.073 11:31:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.073 11:31:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.073 11:31:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.073 node0=512 expecting 512 00:04:03.073 11:31:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.073 11:31:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.073 11:31:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.073 11:31:33 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:03.073 node1=1024 expecting 1024 00:04:03.073 11:31:33 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:03.073 00:04:03.073 real 0m3.074s 00:04:03.073 user 0m0.976s 00:04:03.073 sys 0m1.928s 00:04:03.073 11:31:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.073 11:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:03.073 ************************************ 00:04:03.073 END TEST custom_alloc 00:04:03.073 ************************************ 00:04:03.073 11:31:33 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:03.073 11:31:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.073 11:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.073 11:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:03.073 ************************************ 00:04:03.073 START TEST no_shrink_alloc 00:04:03.073 ************************************ 00:04:03.073 11:31:33 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:03.073 11:31:33 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:03.073 11:31:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.073 11:31:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.073 11:31:33 -- setup/hugepages.sh@51 -- # shift 00:04:03.073 11:31:33 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:03.073 11:31:33 -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.073 11:31:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.073 11:31:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.073 11:31:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.073 11:31:33 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:03.073 11:31:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.073 11:31:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.073 11:31:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.073 11:31:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.073 11:31:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.073 11:31:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.073 11:31:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.073 11:31:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:03.073 11:31:33 -- setup/hugepages.sh@73 -- # return 0 00:04:03.073 11:31:33 -- setup/hugepages.sh@198 -- # setup output 00:04:03.073 11:31:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.073 11:31:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:06.364 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:00:04.5 (8086 2021): 
Already using the vfio-pci driver 00:04:06.364 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.364 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:06.364 11:31:36 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:06.364 11:31:36 -- setup/hugepages.sh@89 -- # local node 00:04:06.364 11:31:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.364 11:31:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.364 11:31:36 -- setup/hugepages.sh@92 -- # local surp 00:04:06.364 11:31:36 -- setup/hugepages.sh@93 -- # local resv 00:04:06.364 11:31:36 -- setup/hugepages.sh@94 -- # local anon 00:04:06.364 11:31:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.364 11:31:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.364 11:31:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.364 11:31:36 -- setup/common.sh@18 -- # local node= 00:04:06.364 11:31:36 -- setup/common.sh@19 -- # local var val 00:04:06.364 11:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.364 11:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.364 11:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.364 11:31:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.364 11:31:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.364 11:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44145512 kB' 'MemAvailable: 47852108 kB' 'Buffers: 4100 kB' 'Cached: 9965948 kB' 'SwapCached: 0 kB' 'Active: 6749516 kB' 'Inactive: 3693068 kB' 'Active(anon): 6355100 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475452 kB' 'Mapped: 173408 kB' 'Shmem: 5882564 kB' 'KReclaimable: 232404 kB' 'Slab: 1064520 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 832116 kB' 'KernelStack: 21936 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7521248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 
-- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.364 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.364 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 
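Aside: this scan is the first of the lookups verify_nr_hugepages makes after no_shrink_alloc asked for 1024 pages on node 0. It reads AnonHugePages (0 here, confirming no transparent hugepages are in play), then HugePages_Surp, then HugePages_Rsvd, and compares the totals against the requested count. The snippet below sketches the shape of that check as inferred from the hugepages.sh line numbers visible in the trace; it reuses the get_meminfo sketch above and is illustrative, not the script's literal text.

    nr_hugepages=1024                      # requested by get_test_nr_hugepages above
    anon=$(get_meminfo AnonHugePages)      # 0 in this run (no THP usage)
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # fetched next in the trace
    total=$(get_meminfo HugePages_Total)   # 1024 per the meminfo dump above
    (( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage count' >&2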
00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- 
setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.365 11:31:36 -- setup/common.sh@33 -- # echo 0 00:04:06.365 11:31:36 -- setup/common.sh@33 -- # return 0 00:04:06.365 11:31:36 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.365 11:31:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.365 11:31:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.365 11:31:36 -- setup/common.sh@18 -- # local node= 00:04:06.365 11:31:36 -- setup/common.sh@19 -- # local var val 00:04:06.365 11:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.365 11:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.365 11:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.365 11:31:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.365 11:31:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.365 11:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44145156 kB' 'MemAvailable: 47851752 kB' 'Buffers: 4100 kB' 'Cached: 9965952 kB' 'SwapCached: 0 kB' 'Active: 6748844 kB' 'Inactive: 3693068 kB' 'Active(anon): 6354428 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475212 kB' 'Mapped: 173292 kB' 'Shmem: 5882568 kB' 'KReclaimable: 232404 kB' 'Slab: 1064488 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 832084 kB' 'KernelStack: 22144 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7521260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 
11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 
11:31:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.365 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.365 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
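Aside on the trace format itself: the backslash-heavy right-hand sides in these comparisons (e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are not garbled text. When bash's xtrace prints a [[ ... == ... ]] operand that is to be matched literally rather than as a glob, it escapes every character of it. The comparisons being traced are ordinary key checks of roughly this form (illustrative only, not the literal setup/common.sh code):

    set -x                       # same tracing mode the harness runs these scripts with
    get=HugePages_Surp
    var=HardwareCorrupted
    [[ $var == "$get" ]]         # quoted RHS -> literal match; xtrace prints it
                                 # character-escaped, as in the lines above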
00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.366 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.366 11:31:36 -- setup/common.sh@33 -- # echo 0 00:04:06.366 11:31:36 -- setup/common.sh@33 -- # return 0 00:04:06.366 11:31:36 -- 
setup/hugepages.sh@99 -- # surp=0 00:04:06.366 11:31:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.366 11:31:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.366 11:31:36 -- setup/common.sh@18 -- # local node= 00:04:06.366 11:31:36 -- setup/common.sh@19 -- # local var val 00:04:06.366 11:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.366 11:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.366 11:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.366 11:31:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.366 11:31:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.366 11:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.366 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44146316 kB' 'MemAvailable: 47852912 kB' 'Buffers: 4100 kB' 'Cached: 9965964 kB' 'SwapCached: 0 kB' 'Active: 6748496 kB' 'Inactive: 3693068 kB' 'Active(anon): 6354080 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474740 kB' 'Mapped: 173292 kB' 'Shmem: 5882580 kB' 'KReclaimable: 232404 kB' 'Slab: 1064480 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 832076 kB' 'KernelStack: 22016 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7521276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 
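The repeated IFS/read/continue lines above are the trace of setup/common.sh's get_meminfo helper scanning every meminfo field until it reaches the requested key. A condensed, self-contained sketch of that pattern follows; names and behaviour are taken from the trace, and the real helper in setup/common.sh may differ in detail.
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: read the whole meminfo
# file, strip any "Node N " prefix, then scan for the requested key and print
# its numeric value (e.g. HugePages_Surp -> 0, HugePages_Total -> 1024).
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo mem
	# A per-node query reads that node's meminfo instead of the global file.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Total      # prints 1024 on this runner
get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0, 0 here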
-- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 
-- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.367 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.367 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': 
' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.368 11:31:36 -- setup/common.sh@33 -- # echo 0 00:04:06.368 11:31:36 -- setup/common.sh@33 -- # return 0 00:04:06.368 11:31:36 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.368 11:31:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.368 nr_hugepages=1024 00:04:06.368 11:31:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.368 resv_hugepages=0 00:04:06.368 11:31:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.368 surplus_hugepages=0 00:04:06.368 11:31:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.368 anon_hugepages=0 00:04:06.368 11:31:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.368 11:31:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.368 11:31:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.368 11:31:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.368 11:31:36 -- setup/common.sh@18 -- # local node= 00:04:06.368 11:31:36 -- setup/common.sh@19 -- # local var val 00:04:06.368 11:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.368 11:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.368 11:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.368 11:31:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.368 11:31:36 -- setup/common.sh@28 -- # mapfile -t 
mem 00:04:06.368 11:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44145036 kB' 'MemAvailable: 47851632 kB' 'Buffers: 4100 kB' 'Cached: 9965976 kB' 'SwapCached: 0 kB' 'Active: 6748428 kB' 'Inactive: 3693068 kB' 'Active(anon): 6354012 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474608 kB' 'Mapped: 173292 kB' 'Shmem: 5882592 kB' 'KReclaimable: 232404 kB' 'Slab: 1064480 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 832076 kB' 'KernelStack: 21984 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7521040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.368 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.368 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 
-- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.369 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.369 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.370 11:31:36 -- setup/common.sh@33 -- # echo 1024 00:04:06.370 11:31:36 -- setup/common.sh@33 -- # return 0 00:04:06.370 11:31:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.370 11:31:36 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.370 11:31:36 -- setup/hugepages.sh@27 -- # local node 00:04:06.370 11:31:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.370 11:31:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.370 11:31:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.370 11:31:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.370 11:31:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.370 11:31:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.370 11:31:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.370 11:31:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.370 11:31:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.370 11:31:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.370 11:31:36 -- setup/common.sh@18 -- # local node=0 00:04:06.370 11:31:36 -- setup/common.sh@19 -- # local var val 00:04:06.370 11:31:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.370 11:31:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.370 11:31:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.370 11:31:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.370 11:31:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.370 11:31:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26475284 kB' 'MemUsed: 6159152 kB' 'SwapCached: 0 kB' 'Active: 2365108 kB' 'Inactive: 163156 kB' 'Active(anon): 2201488 kB' 'Inactive(anon): 0 kB' 'Active(file): 163620 kB' 'Inactive(file): 163156 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2183020 kB' 'Mapped: 58232 kB' 'AnonPages: 348316 kB' 'Shmem: 1856244 kB' 'KernelStack: 12184 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103712 kB' 'Slab: 529360 
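By this point the script has established surp=0, resv=0 and anon=0, read HugePages_Total=1024 back from /proc/meminfo, and passed the (( 1024 == nr_hugepages + surp + resv )) checks; it then walks /sys/devices/system/node/node+([0-9]) to see how those pages are spread across the two NUMA nodes. The arithmetic, restated as a standalone check (variable names here are illustrative, not the script's own):
# The kernel's HugePages_Total must equal the pages the test configured
# plus any surplus and reserved pages.
nr_hugepages=1024                                         # what the test configured
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
	echo "mismatch: total=$total expected $((nr_hugepages + surp + resv))" >&2
fi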
kB' 'SReclaimable: 103712 kB' 'SUnreclaim: 425648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- 
setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.370 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.370 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # continue 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.371 11:31:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.371 11:31:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.371 11:31:36 -- setup/common.sh@33 -- # echo 0 00:04:06.371 11:31:36 -- setup/common.sh@33 -- # return 0 00:04:06.371 11:31:36 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:06.371 11:31:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.371 11:31:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.371 11:31:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.371 11:31:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.371 node0=1024 expecting 1024 00:04:06.371 11:31:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.371 11:31:36 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:06.371 11:31:36 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:06.371 11:31:36 -- setup/hugepages.sh@202 -- # setup output 00:04:06.371 11:31:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.371 11:31:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:09.658 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.658 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.658 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:09.658 11:31:40 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:09.658 11:31:40 -- setup/hugepages.sh@89 -- # local node 00:04:09.658 11:31:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.658 11:31:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.658 11:31:40 -- setup/hugepages.sh@92 -- # local surp 00:04:09.658 11:31:40 -- setup/hugepages.sh@93 -- # local resv 00:04:09.658 11:31:40 -- setup/hugepages.sh@94 -- # local anon 00:04:09.658 11:31:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.658 11:31:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.658 11:31:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.658 11:31:40 -- setup/common.sh@18 -- # local node= 00:04:09.658 11:31:40 -- setup/common.sh@19 -- # local var val 00:04:09.658 11:31:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.658 11:31:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.658 11:31:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.658 11:31:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.658 11:31:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.658 11:31:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # 
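The log then reports node0=1024 expecting 1024, re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no, and setup.sh notes that 1024 hugepages are already allocated on node0; the later meminfo reads still show 1024, so the existing allocation was kept while verify_nr_hugepages starts over from AnonHugePages. The per-node figures being checked come from each node's own meminfo file and can be reproduced directly; the paths below are standard sysfs, but the loop itself is illustrative rather than the script's code.
# Reproduce the per-node hugepage counts the log prints ("node0=1024 ...").
# Each NUMA node exposes its own meminfo with "Node N HugePages_Total: ..." lines.
for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	total=$(awk '/HugePages_Total/ {print $4}' "$node_dir/meminfo")
	free=$(awk '/HugePages_Free/ {print $4}' "$node_dir/meminfo")
	echo "node${node}: HugePages_Total=${total} HugePages_Free=${free}"
done
# On this runner the trace shows node0 with 1024 pages (all free) and node1 with 0.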
read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44145616 kB' 'MemAvailable: 47852212 kB' 'Buffers: 4100 kB' 'Cached: 9966068 kB' 'SwapCached: 0 kB' 'Active: 6751944 kB' 'Inactive: 3693068 kB' 'Active(anon): 6357528 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478148 kB' 'Mapped: 173340 kB' 'Shmem: 5882684 kB' 'KReclaimable: 232404 kB' 'Slab: 1064196 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 831792 kB' 'KernelStack: 22048 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7521892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.658 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.658 11:31:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- 
setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 11:31:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.659 11:31:40 -- setup/common.sh@33 -- # echo 0 00:04:09.659 11:31:40 -- setup/common.sh@33 -- # return 0 00:04:09.659 11:31:40 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.659 11:31:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.659 11:31:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.659 11:31:40 -- setup/common.sh@18 -- # local node= 00:04:09.659 11:31:40 -- setup/common.sh@19 -- # local var val 00:04:09.659 11:31:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.659 11:31:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.659 11:31:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.659 11:31:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.659 11:31:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.659 11:31:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44147172 kB' 'MemAvailable: 47853768 kB' 'Buffers: 4100 kB' 'Cached: 9966076 kB' 'SwapCached: 0 kB' 'Active: 6751024 kB' 'Inactive: 3693068 kB' 'Active(anon): 6356608 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477232 kB' 'Mapped: 173300 kB' 'Shmem: 5882692 kB' 'KReclaimable: 232404 kB' 'Slab: 1064356 
kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 831952 kB' 'KernelStack: 21920 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7521908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 
-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.921 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.921 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 
11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 
-- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.922 11:31:40 -- setup/common.sh@33 -- # echo 0 00:04:09.922 11:31:40 -- setup/common.sh@33 -- # return 0 00:04:09.922 11:31:40 -- setup/hugepages.sh@99 -- # surp=0 00:04:09.922 11:31:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.922 11:31:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.922 11:31:40 -- setup/common.sh@18 -- # local node= 00:04:09.922 11:31:40 -- setup/common.sh@19 -- # local var val 00:04:09.922 11:31:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.922 11:31:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.922 11:31:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.922 11:31:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.922 11:31:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.922 11:31:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44147424 kB' 'MemAvailable: 47854020 kB' 'Buffers: 4100 kB' 'Cached: 9966088 kB' 'SwapCached: 0 kB' 'Active: 6750652 kB' 'Inactive: 3693068 kB' 'Active(anon): 6356236 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476876 kB' 'Mapped: 173300 kB' 'Shmem: 5882704 kB' 'KReclaimable: 232404 kB' 'Slab: 1064356 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 831952 kB' 'KernelStack: 21920 kB' 'PageTables: 7384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7517376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.922 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.922 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- 
# continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.923 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.923 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.924 11:31:40 -- setup/common.sh@33 -- # echo 0 00:04:09.924 11:31:40 -- setup/common.sh@33 -- # return 0 00:04:09.924 11:31:40 -- setup/hugepages.sh@100 -- # resv=0 00:04:09.924 11:31:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.924 nr_hugepages=1024 
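The long traces above are setup/common.sh's get_meminfo helper walking a meminfo file line by line, looking for AnonHugePages, HugePages_Surp and HugePages_Rsvd in turn; each "[[ Key == ... ]] / continue" pair is one rejected field, and the closing "echo 0 / return 0" is the matched value. A minimal bash sketch of that pattern, reconstructed from the xtrace output (it is not the verbatim setup/common.sh source; the extglob prefix strip and the combined node check are inferred from the trace):

    # Minimal sketch of the lookup traced above (reconstructed from the xtrace;
    # not the verbatim setup/common.sh helper).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-node lookups read the node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip it so the
        # "Key: value" split below works for both the system and node cases.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the 'continue' lines in the trace
            echo "$val"                        # value in kB, or a bare page count
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Usage matching the run above: anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd) all return 0 here, and the printed snapshot is self-consistent: HugePages_Total (1024) times Hugepagesize (2048 kB) gives the reported Hugetlb value of 2097152 kB.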
00:04:09.924 11:31:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.924 resv_hugepages=0 00:04:09.924 11:31:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.924 surplus_hugepages=0 00:04:09.924 11:31:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.924 anon_hugepages=0 00:04:09.924 11:31:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.924 11:31:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.924 11:31:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.924 11:31:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.924 11:31:40 -- setup/common.sh@18 -- # local node= 00:04:09.924 11:31:40 -- setup/common.sh@19 -- # local var val 00:04:09.924 11:31:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.924 11:31:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.924 11:31:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.924 11:31:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.924 11:31:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.924 11:31:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283788 kB' 'MemFree: 44147776 kB' 'MemAvailable: 47854372 kB' 'Buffers: 4100 kB' 'Cached: 9966112 kB' 'SwapCached: 0 kB' 'Active: 6749724 kB' 'Inactive: 3693068 kB' 'Active(anon): 6355308 kB' 'Inactive(anon): 0 kB' 'Active(file): 394416 kB' 'Inactive(file): 3693068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475884 kB' 'Mapped: 173300 kB' 'Shmem: 5882728 kB' 'KReclaimable: 232404 kB' 'Slab: 1064236 kB' 'SReclaimable: 232404 kB' 'SUnreclaim: 831832 kB' 'KernelStack: 21888 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481920 kB' 'Committed_AS: 7517388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 79744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2268532 kB' 'DirectMap2M: 21534720 kB' 'DirectMap1G: 46137344 kB' 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.924 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.924 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.925 11:31:40 -- setup/common.sh@33 -- # echo 1024 00:04:09.925 11:31:40 -- setup/common.sh@33 -- # return 0 00:04:09.925 11:31:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.925 11:31:40 -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.925 11:31:40 -- setup/hugepages.sh@27 -- # local node 00:04:09.925 11:31:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.925 11:31:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.925 11:31:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.925 11:31:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:09.925 11:31:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.925 11:31:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.925 11:31:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.925 11:31:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.925 11:31:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.925 11:31:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.925 11:31:40 -- setup/common.sh@18 -- # local node=0 00:04:09.925 11:31:40 -- setup/common.sh@19 -- # local var val 00:04:09.925 11:31:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.925 11:31:40 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.925 11:31:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.925 11:31:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.925 11:31:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.925 11:31:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26476056 kB' 'MemUsed: 6158380 kB' 'SwapCached: 0 kB' 'Active: 2366068 kB' 'Inactive: 163156 kB' 'Active(anon): 2202448 kB' 'Inactive(anon): 0 kB' 'Active(file): 163620 kB' 'Inactive(file): 163156 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2183032 kB' 'Mapped: 58240 kB' 'AnonPages: 349428 kB' 'Shmem: 1856256 kB' 'KernelStack: 12072 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103712 kB' 'Slab: 528948 kB' 'SReclaimable: 103712 kB' 'SUnreclaim: 425236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.925 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.925 11:31:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- 
setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # continue 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 11:31:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 11:31:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.926 11:31:40 -- setup/common.sh@33 -- # echo 0 00:04:09.926 11:31:40 -- setup/common.sh@33 -- # return 0 00:04:09.926 11:31:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.926 11:31:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.926 11:31:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.926 11:31:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.926 11:31:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.926 node0=1024 expecting 1024 00:04:09.926 11:31:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.926 00:04:09.926 real 0m7.008s 00:04:09.926 user 0m2.548s 00:04:09.927 sys 0m4.570s 00:04:09.927 11:31:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.927 11:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:09.927 ************************************ 00:04:09.927 END TEST no_shrink_alloc 00:04:09.927 ************************************ 00:04:09.927 11:31:40 -- setup/hugepages.sh@217 -- # clear_hp 00:04:09.927 11:31:40 -- setup/hugepages.sh@37 -- # local node hp 00:04:09.927 11:31:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.927 11:31:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.927 11:31:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.927 11:31:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.927 11:31:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.927 11:31:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.927 11:31:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.927 11:31:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.927 11:31:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.927 11:31:40 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.927 11:31:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:09.927 11:31:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:09.927 00:04:09.927 real 0m26.565s 00:04:09.927 user 0m9.095s 00:04:09.927 sys 0m15.700s 00:04:09.927 11:31:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.927 11:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:09.927 ************************************ 00:04:09.927 END TEST hugepages 00:04:09.927 ************************************ 00:04:09.927 11:31:40 -- setup/test-setup.sh@14 -- # 
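The loop traced above is setup/common.sh walking every key of the node-local meminfo file until it reaches HugePages_Surp for node0. A minimal standalone sketch of that scan follows; the sed-based "Node N" prefix strip and the hard-coded node number are simplifications for illustration, not the real helper.

#!/usr/bin/env bash
# Sketch of the per-node meminfo scan seen in the trace above.
# Assumes node0 exists; the actual setup/common.sh helper is more general.
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node${node}/meminfo ]] && \
    mem_f=/sys/devices/system/node/node${node}/meminfo

# Node meminfo lines look like "Node 0 HugePages_Surp: 0"; strip the
# "Node 0 " prefix so both file formats parse the same way.
mapfile -t mem < <(sed -E 's/^Node +[0-9]+ +//' "$mem_f")

for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == HugePages_Surp ]]; then
        echo "node${node} surplus hugepages: ${val:-0}"
        break
    fi
done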
run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:09.927 11:31:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.927 11:31:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.927 11:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:09.927 ************************************ 00:04:09.927 START TEST driver 00:04:09.927 ************************************ 00:04:09.927 11:31:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:10.186 * Looking for test storage... 00:04:10.186 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:10.186 11:31:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:10.186 11:31:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:10.186 11:31:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:10.186 11:31:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:10.186 11:31:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:10.186 11:31:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:10.186 11:31:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:10.186 11:31:40 -- scripts/common.sh@335 -- # IFS=.-: 00:04:10.186 11:31:40 -- scripts/common.sh@335 -- # read -ra ver1 00:04:10.186 11:31:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.186 11:31:40 -- scripts/common.sh@336 -- # read -ra ver2 00:04:10.186 11:31:40 -- scripts/common.sh@337 -- # local 'op=<' 00:04:10.186 11:31:40 -- scripts/common.sh@339 -- # ver1_l=2 00:04:10.186 11:31:40 -- scripts/common.sh@340 -- # ver2_l=1 00:04:10.186 11:31:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:10.186 11:31:40 -- scripts/common.sh@343 -- # case "$op" in 00:04:10.186 11:31:40 -- scripts/common.sh@344 -- # : 1 00:04:10.186 11:31:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:10.186 11:31:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.186 11:31:40 -- scripts/common.sh@364 -- # decimal 1 00:04:10.186 11:31:40 -- scripts/common.sh@352 -- # local d=1 00:04:10.186 11:31:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.186 11:31:40 -- scripts/common.sh@354 -- # echo 1 00:04:10.186 11:31:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:10.186 11:31:40 -- scripts/common.sh@365 -- # decimal 2 00:04:10.186 11:31:40 -- scripts/common.sh@352 -- # local d=2 00:04:10.186 11:31:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.186 11:31:40 -- scripts/common.sh@354 -- # echo 2 00:04:10.186 11:31:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:10.186 11:31:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:10.186 11:31:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:10.186 11:31:40 -- scripts/common.sh@367 -- # return 0 00:04:10.186 11:31:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.186 11:31:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.186 --rc genhtml_branch_coverage=1 00:04:10.186 --rc genhtml_function_coverage=1 00:04:10.186 --rc genhtml_legend=1 00:04:10.186 --rc geninfo_all_blocks=1 00:04:10.186 --rc geninfo_unexecuted_blocks=1 00:04:10.186 00:04:10.186 ' 00:04:10.186 11:31:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.186 --rc genhtml_branch_coverage=1 00:04:10.186 --rc genhtml_function_coverage=1 00:04:10.186 --rc genhtml_legend=1 00:04:10.186 --rc geninfo_all_blocks=1 00:04:10.186 --rc geninfo_unexecuted_blocks=1 00:04:10.186 00:04:10.186 ' 00:04:10.186 11:31:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.186 --rc genhtml_branch_coverage=1 00:04:10.186 --rc genhtml_function_coverage=1 00:04:10.186 --rc genhtml_legend=1 00:04:10.186 --rc geninfo_all_blocks=1 00:04:10.186 --rc geninfo_unexecuted_blocks=1 00:04:10.186 00:04:10.186 ' 00:04:10.186 11:31:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.186 --rc genhtml_branch_coverage=1 00:04:10.186 --rc genhtml_function_coverage=1 00:04:10.186 --rc genhtml_legend=1 00:04:10.186 --rc geninfo_all_blocks=1 00:04:10.186 --rc geninfo_unexecuted_blocks=1 00:04:10.186 00:04:10.186 ' 00:04:10.186 11:31:40 -- setup/driver.sh@68 -- # setup reset 00:04:10.186 11:31:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.186 11:31:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.459 11:31:45 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:15.459 11:31:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.459 11:31:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.459 11:31:45 -- common/autotest_common.sh@10 -- # set +x 00:04:15.459 ************************************ 00:04:15.459 START TEST guess_driver 00:04:15.459 ************************************ 00:04:15.459 11:31:45 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:15.459 11:31:45 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:15.459 11:31:45 -- setup/driver.sh@47 -- # local fail=0 00:04:15.459 11:31:45 -- setup/driver.sh@49 -- # pick_driver 00:04:15.459 11:31:45 -- setup/driver.sh@36 -- 
# vfio 00:04:15.459 11:31:45 -- setup/driver.sh@21 -- # local iommu_grups 00:04:15.459 11:31:45 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:15.459 11:31:45 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:15.459 11:31:45 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:15.459 11:31:45 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:15.459 11:31:45 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:15.459 11:31:45 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:15.459 11:31:45 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:15.459 11:31:45 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:15.459 11:31:45 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:15.459 11:31:45 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:15.459 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:15.459 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:15.459 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:15.459 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:15.459 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:15.459 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:15.459 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:15.459 11:31:45 -- setup/driver.sh@30 -- # return 0 00:04:15.459 11:31:45 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:15.459 11:31:45 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:15.459 11:31:45 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:15.459 11:31:45 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:15.459 Looking for driver=vfio-pci 00:04:15.459 11:31:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.459 11:31:45 -- setup/driver.sh@45 -- # setup output config 00:04:15.459 11:31:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.459 11:31:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.745 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.745 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.745 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.746 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.746 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.746 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.746 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.746 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.746 11:31:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.746 11:31:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.746 11:31:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.746 11:31:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.746 11:31:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.746 11:31:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.651 11:31:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.651 11:31:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.651 11:31:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.651 11:31:51 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:20.651 11:31:51 -- setup/driver.sh@65 -- # setup reset 00:04:20.651 11:31:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.651 11:31:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.994 00:04:25.994 real 0m10.174s 00:04:25.994 user 0m2.562s 00:04:25.994 sys 0m4.960s 00:04:25.994 11:31:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.994 11:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:25.994 ************************************ 00:04:25.994 END TEST guess_driver 00:04:25.994 ************************************ 00:04:25.994 00:04:25.994 real 0m15.268s 00:04:25.994 user 0m4.100s 00:04:25.994 sys 0m7.776s 00:04:25.994 11:31:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.994 11:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:25.994 
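The pick_driver trace above amounts to: if IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and modprobe can resolve the vfio_pci dependency chain to real .ko files, report vfio-pci. A rough sketch of that decision is below; the fallback string is taken from the comparison visible in the trace, and this is not the actual setup/driver.sh.

#!/usr/bin/env bash
# Sketch of the guess_driver decision traced above.
pick_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        # modprobe --show-depends prints the insmod lines it would run;
        # seeing ".ko" means vfio_pci and its dependencies are available.
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
}

driver=$(pick_driver)
echo "Looking for driver=${driver}"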
************************************ 00:04:25.994 END TEST driver 00:04:25.994 ************************************ 00:04:25.994 11:31:55 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:25.994 11:31:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.994 11:31:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.994 11:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:25.994 ************************************ 00:04:25.994 START TEST devices 00:04:25.994 ************************************ 00:04:25.994 11:31:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:25.994 * Looking for test storage... 00:04:25.994 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:25.994 11:31:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:25.994 11:31:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:25.994 11:31:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:25.994 11:31:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:25.994 11:31:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:25.994 11:31:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:25.994 11:31:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:25.994 11:31:56 -- scripts/common.sh@335 -- # IFS=.-: 00:04:25.994 11:31:56 -- scripts/common.sh@335 -- # read -ra ver1 00:04:25.994 11:31:56 -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.994 11:31:56 -- scripts/common.sh@336 -- # read -ra ver2 00:04:25.994 11:31:56 -- scripts/common.sh@337 -- # local 'op=<' 00:04:25.994 11:31:56 -- scripts/common.sh@339 -- # ver1_l=2 00:04:25.994 11:31:56 -- scripts/common.sh@340 -- # ver2_l=1 00:04:25.994 11:31:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:25.994 11:31:56 -- scripts/common.sh@343 -- # case "$op" in 00:04:25.994 11:31:56 -- scripts/common.sh@344 -- # : 1 00:04:25.994 11:31:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:25.994 11:31:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.994 11:31:56 -- scripts/common.sh@364 -- # decimal 1 00:04:25.994 11:31:56 -- scripts/common.sh@352 -- # local d=1 00:04:25.994 11:31:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.994 11:31:56 -- scripts/common.sh@354 -- # echo 1 00:04:25.994 11:31:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:25.994 11:31:56 -- scripts/common.sh@365 -- # decimal 2 00:04:25.994 11:31:56 -- scripts/common.sh@352 -- # local d=2 00:04:25.994 11:31:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.994 11:31:56 -- scripts/common.sh@354 -- # echo 2 00:04:25.994 11:31:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:25.994 11:31:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:25.994 11:31:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:25.994 11:31:56 -- scripts/common.sh@367 -- # return 0 00:04:25.994 11:31:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.994 11:31:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.994 --rc genhtml_branch_coverage=1 00:04:25.994 --rc genhtml_function_coverage=1 00:04:25.994 --rc genhtml_legend=1 00:04:25.994 --rc geninfo_all_blocks=1 00:04:25.994 --rc geninfo_unexecuted_blocks=1 00:04:25.994 00:04:25.994 ' 00:04:25.994 11:31:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.994 --rc genhtml_branch_coverage=1 00:04:25.994 --rc genhtml_function_coverage=1 00:04:25.994 --rc genhtml_legend=1 00:04:25.994 --rc geninfo_all_blocks=1 00:04:25.994 --rc geninfo_unexecuted_blocks=1 00:04:25.994 00:04:25.994 ' 00:04:25.994 11:31:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.994 --rc genhtml_branch_coverage=1 00:04:25.994 --rc genhtml_function_coverage=1 00:04:25.994 --rc genhtml_legend=1 00:04:25.994 --rc geninfo_all_blocks=1 00:04:25.994 --rc geninfo_unexecuted_blocks=1 00:04:25.994 00:04:25.994 ' 00:04:25.994 11:31:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.994 --rc genhtml_branch_coverage=1 00:04:25.994 --rc genhtml_function_coverage=1 00:04:25.994 --rc genhtml_legend=1 00:04:25.994 --rc geninfo_all_blocks=1 00:04:25.994 --rc geninfo_unexecuted_blocks=1 00:04:25.994 00:04:25.994 ' 00:04:25.994 11:31:56 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:25.994 11:31:56 -- setup/devices.sh@192 -- # setup reset 00:04:25.994 11:31:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.994 11:31:56 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.187 11:31:59 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:30.187 11:31:59 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:30.187 11:31:59 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:30.187 11:31:59 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:30.187 11:31:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:30.187 11:31:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:30.187 11:31:59 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:30.187 11:31:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.187 11:31:59 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:30.187 11:31:59 -- setup/devices.sh@196 -- # blocks=() 00:04:30.187 11:31:59 -- setup/devices.sh@196 -- # declare -a blocks 00:04:30.187 11:31:59 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:30.187 11:31:59 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:30.187 11:31:59 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:30.187 11:31:59 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:30.187 11:31:59 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:30.187 11:31:59 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:30.187 11:31:59 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:30.187 11:31:59 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:30.187 11:31:59 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:30.187 11:31:59 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:30.187 11:31:59 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:30.187 No valid GPT data, bailing 00:04:30.187 11:31:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:30.187 11:31:59 -- scripts/common.sh@393 -- # pt= 00:04:30.187 11:31:59 -- scripts/common.sh@394 -- # return 1 00:04:30.187 11:31:59 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:30.187 11:31:59 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:30.187 11:31:59 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:30.187 11:31:59 -- setup/common.sh@80 -- # echo 2000398934016 00:04:30.187 11:31:59 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:30.187 11:31:59 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:30.187 11:31:59 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:30.187 11:31:59 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:30.187 11:31:59 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:30.187 11:31:59 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:30.187 11:31:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.187 11:31:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.187 11:31:59 -- common/autotest_common.sh@10 -- # set +x 00:04:30.187 ************************************ 00:04:30.187 START TEST nvme_mount 00:04:30.187 ************************************ 00:04:30.187 11:31:59 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:30.187 11:31:59 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:30.187 11:31:59 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:30.187 11:31:59 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.187 11:31:59 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.187 11:31:59 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:30.187 11:31:59 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:30.187 11:31:59 -- setup/common.sh@40 -- # local part_no=1 00:04:30.187 11:31:59 -- setup/common.sh@41 -- # local size=1073741824 00:04:30.187 11:31:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:30.187 11:31:59 -- setup/common.sh@44 -- # parts=() 00:04:30.187 11:31:59 -- setup/common.sh@44 -- # local parts 00:04:30.187 11:31:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:30.187 11:31:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.187 11:32:00 
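Before the mount tests start, the trace above screens /dev/nvme0n1: spdk-gpt.py bails with "No valid GPT data", blkid returns an empty PTTYPE, and the capacity clears the 3 GiB floor. A hedged sketch of that screening; deriving the byte size from sysfs 512-byte sectors is an assumption about how the helper arrives at 2000398934016.

#!/usr/bin/env bash
# Sketch of the disk-screening step traced above (illustrative only).
dev=nvme0n1
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes

# An empty PTTYPE from blkid means no partition table is present.
pt=$(blkid -s PTTYPE -o value "/dev/$dev" || true)
if [[ -n $pt ]]; then
    echo "/dev/$dev already carries a $pt partition table, skipping"
    exit 1
fi

# Size in bytes = 512-byte sectors reported by sysfs * 512.
size=$(( $(cat "/sys/block/$dev/size") * 512 ))
if (( size >= min_disk_size )); then
    echo "using /dev/$dev ($size bytes) for the mount tests"
fi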
-- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.187 11:32:00 -- setup/common.sh@46 -- # (( part++ )) 00:04:30.187 11:32:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.187 11:32:00 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:30.187 11:32:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:30.187 11:32:00 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:30.447 Creating new GPT entries in memory. 00:04:30.447 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.447 other utilities. 00:04:30.447 11:32:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.447 11:32:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.447 11:32:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.447 11:32:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.447 11:32:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:31.825 Creating new GPT entries in memory. 00:04:31.825 The operation has completed successfully. 00:04:31.825 11:32:02 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.825 11:32:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.825 11:32:02 -- setup/common.sh@62 -- # wait 3550723 00:04:31.825 11:32:02 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.825 11:32:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:31.825 11:32:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.825 11:32:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:31.825 11:32:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:31.825 11:32:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.825 11:32:02 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.825 11:32:02 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:31.825 11:32:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:31.825 11:32:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.825 11:32:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.825 11:32:02 -- setup/devices.sh@53 -- # local found=0 00:04:31.825 11:32:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.825 11:32:02 -- setup/devices.sh@56 -- # : 00:04:31.825 11:32:02 -- setup/devices.sh@59 -- # local pci status 00:04:31.825 11:32:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.825 11:32:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:31.825 11:32:02 -- setup/devices.sh@47 -- # setup output config 00:04:31.825 11:32:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.825 11:32:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:35.113 11:32:05 -- setup/devices.sh@63 -- # found=1 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.113 11:32:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.113 11:32:05 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:35.113 11:32:05 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.113 11:32:05 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 
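The nvme_mount steps just traced reduce to a short sequence: zap the GPT, carve a 1 GiB partition, format it ext4, mount it, drop the test_nvme marker file, then tear everything down. A condensed sketch with shortened paths (the real test also waits on udev uevents via sync_dev_uevents.sh):

#!/usr/bin/env bash
# Condensed sketch of the nvme_mount flow in the trace above.
set -e
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount          # the test uses .../spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199      # sectors 2048..2099199 = 1 GiB
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                   # marker file the verify step looks for

# teardown, mirroring cleanup_nvme
umount "$mnt"
wipefs --all "${disk}p1"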
00:04:35.113 11:32:05 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.113 11:32:05 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:35.113 11:32:05 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.113 11:32:05 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.113 11:32:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:35.113 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:35.113 11:32:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.113 11:32:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:35.372 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:35.372 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:35.372 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:35.372 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:35.372 11:32:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:35.372 11:32:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:35.372 11:32:05 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.372 11:32:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:35.372 11:32:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:35.372 11:32:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.372 11:32:05 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.372 11:32:05 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:35.372 11:32:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:35.372 11:32:05 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.372 11:32:05 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.372 11:32:05 -- setup/devices.sh@53 -- # local found=0 00:04:35.372 11:32:05 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.372 11:32:05 -- setup/devices.sh@56 -- # : 00:04:35.372 11:32:05 -- setup/devices.sh@59 -- # local pci status 00:04:35.372 11:32:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.372 11:32:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:35.372 11:32:05 -- setup/devices.sh@47 -- # setup output config 00:04:35.372 11:32:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.372 11:32:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:38.660 11:32:08 -- setup/devices.sh@63 -- # found=1 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.660 11:32:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.660 11:32:09 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.660 11:32:09 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.660 11:32:09 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.660 11:32:09 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.660 11:32:09 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.660 11:32:09 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:38.660 11:32:09 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:38.660 11:32:09 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:38.660 11:32:09 -- setup/devices.sh@50 -- # local mount_point= 00:04:38.660 11:32:09 -- setup/devices.sh@51 -- # local test_file= 00:04:38.660 11:32:09 -- setup/devices.sh@53 -- # local found=0 00:04:38.660 11:32:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.660 11:32:09 -- setup/devices.sh@59 -- # local pci status 00:04:38.660 11:32:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.660 11:32:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:38.660 11:32:09 -- setup/devices.sh@47 -- # setup output config 00:04:38.660 11:32:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.660 11:32:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:41.946 11:32:12 -- setup/devices.sh@63 -- # found=1 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.946 11:32:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.946 11:32:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:41.946 11:32:12 -- setup/devices.sh@68 -- # return 0 00:04:41.946 11:32:12 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:41.946 11:32:12 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.946 11:32:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.946 11:32:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.946 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.946 00:04:41.946 real 0m12.359s 00:04:41.946 user 0m3.367s 00:04:41.946 sys 0m6.872s 00:04:41.946 11:32:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.946 11:32:12 -- common/autotest_common.sh@10 -- # set +x 00:04:41.946 ************************************ 00:04:41.946 END TEST nvme_mount 00:04:41.946 ************************************ 00:04:41.946 11:32:12 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:41.946 11:32:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.946 11:32:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.946 11:32:12 -- common/autotest_common.sh@10 -- # set +x 00:04:41.946 ************************************ 00:04:41.946 START TEST dm_mount 00:04:41.946 ************************************ 00:04:41.946 11:32:12 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:41.946 11:32:12 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:41.946 11:32:12 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:41.946 11:32:12 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:41.946 11:32:12 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:41.946 11:32:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.946 11:32:12 -- setup/common.sh@40 -- # local part_no=2 00:04:41.946 11:32:12 -- setup/common.sh@41 -- # local size=1073741824 00:04:41.946 11:32:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.946 11:32:12 -- setup/common.sh@44 -- # parts=() 00:04:41.946 11:32:12 -- setup/common.sh@44 -- # local parts 00:04:41.946 11:32:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.946 11:32:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.946 11:32:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.947 11:32:12 -- setup/common.sh@46 -- # (( part++ )) 00:04:41.947 11:32:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.947 11:32:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.947 11:32:12 -- setup/common.sh@46 -- # (( part++ )) 00:04:41.947 11:32:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.947 11:32:12 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:41.947 11:32:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:41.947 
11:32:12 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:42.882 Creating new GPT entries in memory. 00:04:42.882 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:42.882 other utilities. 00:04:42.882 11:32:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:42.882 11:32:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.882 11:32:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.882 11:32:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.882 11:32:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.826 Creating new GPT entries in memory. 00:04:43.826 The operation has completed successfully. 00:04:43.826 11:32:14 -- setup/common.sh@57 -- # (( part++ )) 00:04:43.826 11:32:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.826 11:32:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.826 11:32:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.826 11:32:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:45.211 The operation has completed successfully. 00:04:45.211 11:32:15 -- setup/common.sh@57 -- # (( part++ )) 00:04:45.211 11:32:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.211 11:32:15 -- setup/common.sh@62 -- # wait 3555231 00:04:45.211 11:32:15 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:45.211 11:32:15 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:45.211 11:32:15 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.211 11:32:15 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:45.211 11:32:15 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:45.211 11:32:15 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.211 11:32:15 -- setup/devices.sh@161 -- # break 00:04:45.211 11:32:15 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.211 11:32:15 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:45.211 11:32:15 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:45.211 11:32:15 -- setup/devices.sh@166 -- # dm=dm-2 00:04:45.211 11:32:15 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:45.211 11:32:15 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:45.211 11:32:15 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:45.211 11:32:15 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:45.211 11:32:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:45.211 11:32:15 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.211 11:32:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:45.211 11:32:15 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:45.211 11:32:15 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.211 11:32:15 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:45.211 11:32:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:45.211 11:32:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:45.211 11:32:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.211 11:32:15 -- setup/devices.sh@53 -- # local found=0 00:04:45.211 11:32:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.211 11:32:15 -- setup/devices.sh@56 -- # : 00:04:45.211 11:32:15 -- setup/devices.sh@59 -- # local pci status 00:04:45.211 11:32:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.211 11:32:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:45.211 11:32:15 -- setup/devices.sh@47 -- # setup output config 00:04:45.211 11:32:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.211 11:32:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:48.501 11:32:18 -- setup/devices.sh@63 -- # found=1 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.501 11:32:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.501 11:32:18 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:48.501 11:32:18 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:48.501 11:32:18 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:48.501 11:32:18 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:48.501 11:32:18 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:48.501 11:32:19 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:48.501 11:32:19 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:48.501 11:32:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:48.501 11:32:19 -- setup/devices.sh@50 -- # local mount_point= 00:04:48.501 11:32:19 -- setup/devices.sh@51 -- # local test_file= 00:04:48.501 11:32:19 -- setup/devices.sh@53 -- # local found=0 00:04:48.501 11:32:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:48.501 11:32:19 -- setup/devices.sh@59 -- # local pci status 00:04:48.501 11:32:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.501 11:32:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:48.501 11:32:19 -- setup/devices.sh@47 -- # setup output config 00:04:48.501 11:32:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.501 11:32:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:51.791 11:32:21 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:51.791 11:32:21 -- setup/devices.sh@63 -- # found=1 00:04:51.791 11:32:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.791 11:32:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.791 11:32:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:51.791 11:32:22 -- setup/devices.sh@68 -- # return 0 00:04:51.791 11:32:22 -- setup/devices.sh@187 -- # cleanup_dm 00:04:51.791 11:32:22 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:51.791 11:32:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.791 11:32:22 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:51.791 11:32:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:51.791 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:51.791 11:32:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:51.791 00:04:51.791 real 0m9.902s 00:04:51.791 user 0m2.339s 00:04:51.791 sys 0m4.628s 00:04:51.791 11:32:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.791 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:51.791 
************************************ 00:04:51.791 END TEST dm_mount 00:04:51.791 ************************************ 00:04:51.791 11:32:22 -- setup/devices.sh@1 -- # cleanup 00:04:51.791 11:32:22 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:51.791 11:32:22 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.791 11:32:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:51.791 11:32:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.791 11:32:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.051 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.051 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:52.051 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.051 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.051 11:32:22 -- setup/devices.sh@12 -- # cleanup_dm 00:04:52.051 11:32:22 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:52.051 11:32:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.051 11:32:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.051 11:32:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.051 11:32:22 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.051 11:32:22 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:52.051 00:04:52.051 real 0m26.802s 00:04:52.051 user 0m7.306s 00:04:52.051 sys 0m14.372s 00:04:52.051 11:32:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.051 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.051 ************************************ 00:04:52.051 END TEST devices 00:04:52.051 ************************************ 00:04:52.310 00:04:52.310 real 1m33.106s 00:04:52.310 user 0m27.893s 00:04:52.310 sys 0m52.480s 00:04:52.310 11:32:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.310 11:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.310 ************************************ 00:04:52.310 END TEST setup.sh 00:04:52.310 ************************************ 00:04:52.310 11:32:22 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:55.602 Hugepages 00:04:55.602 node hugesize free / total 00:04:55.602 node0 1048576kB 0 / 0 00:04:55.602 node0 2048kB 2048 / 2048 00:04:55.602 node1 1048576kB 0 / 0 00:04:55.602 node1 2048kB 0 / 0 00:04:55.602 00:04:55.602 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.602 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:55.602 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:55.602 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:55.602 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:55.602 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:55.602 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:55.602 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:55.602 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:55.602 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:55.602 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:55.602 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:55.602 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:55.602 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:55.602 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:55.602 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:04:55.602 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:55.602 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:55.602 11:32:26 -- spdk/autotest.sh@128 -- # uname -s 00:04:55.602 11:32:26 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:55.602 11:32:26 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:55.602 11:32:26 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:58.893 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:58.893 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:58.893 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:58.893 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.153 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:01.062 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:01.321 11:32:31 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:02.259 11:32:32 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:02.259 11:32:32 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:02.259 11:32:32 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:02.259 11:32:32 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:02.259 11:32:32 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:02.259 11:32:32 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:02.259 11:32:32 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.259 11:32:32 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:02.259 11:32:32 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:02.259 11:32:32 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:02.259 11:32:32 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:02.259 11:32:32 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.549 Waiting for block devices as requested 00:05:05.549 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:05.810 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:05.810 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:05.810 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:05.810 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:06.070 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:06.070 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:06.070 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:06.329 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:06.329 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:06.329 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:06.589 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:06.589 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:06.589 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:06.848 0000:80:04.1 (8086 
2021): vfio-pci -> ioatdma 00:05:06.848 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:06.848 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:07.106 11:32:37 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:07.107 11:32:37 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:07.107 11:32:37 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:05:07.107 11:32:37 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:07.107 11:32:37 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:07.107 11:32:37 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:07.107 11:32:37 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:07.107 11:32:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:07.107 11:32:37 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:07.107 11:32:37 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:07.107 11:32:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:07.107 11:32:37 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:07.107 11:32:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:07.107 11:32:37 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:05:07.107 11:32:37 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:07.107 11:32:37 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:07.107 11:32:37 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:07.107 11:32:37 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:07.107 11:32:37 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:07.107 11:32:37 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:07.107 11:32:37 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:07.107 11:32:37 -- common/autotest_common.sh@1552 -- # continue 00:05:07.107 11:32:37 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:07.107 11:32:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.107 11:32:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.107 11:32:37 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:07.107 11:32:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.107 11:32:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.107 11:32:37 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:10.479 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.479 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
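The trace above resolves the PCI address 0000:d8:00.0 to its NVMe character device by walking sysfs, then reads the controller's OACS and unvmcap fields with nvme id-ctrl to confirm namespace management is supported before the test continues. A minimal standalone sketch of the same lookup (the BDF and the 0x8 namespace-management bit come from this run; the loop itself is illustrative, not the harness code):

  bdf=0000:d8:00.0                                   # controller under test in this run
  for link in /sys/class/nvme/nvme*; do
      # the resolved sysfs path contains the PCI function the controller hangs off
      if readlink -f "$link" | grep -q "/$bdf/nvme/"; then
          ctrlr=/dev/$(basename "$link")
      fi
  done
  echo "controller behind $bdf: $ctrlr"
  # OACS bit 3 (0x8) advertises namespace management; unvmcap is checked next
  nvme id-ctrl "$ctrlr" | grep -E 'oacs|unvmcap'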
00:05:12.383 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:12.383 11:32:42 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:12.383 11:32:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.383 11:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.642 11:32:43 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:12.642 11:32:43 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:12.642 11:32:43 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.642 11:32:43 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:12.642 11:32:43 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:12.642 11:32:43 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:12.642 11:32:43 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:12.642 11:32:43 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:12.643 11:32:43 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.643 11:32:43 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.643 11:32:43 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:12.643 11:32:43 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:12.643 11:32:43 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:12.643 11:32:43 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:12.643 11:32:43 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:12.643 11:32:43 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:05:12.643 11:32:43 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:12.643 11:32:43 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:05:12.643 11:32:43 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:05:12.643 11:32:43 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:05:12.643 11:32:43 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=3565446 00:05:12.643 11:32:43 -- common/autotest_common.sh@1593 -- # waitforlisten 3565446 00:05:12.643 11:32:43 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.643 11:32:43 -- common/autotest_common.sh@829 -- # '[' -z 3565446 ']' 00:05:12.643 11:32:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.643 11:32:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.643 11:32:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.643 11:32:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.643 11:32:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.643 [2024-12-03 11:32:43.191987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
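The get_nvme_bdfs helper above enumerates NVMe controllers by asking gen_nvme.sh for a bdev config and pulling each entry's traddr with jq, and opal_revert_cleanup then keeps only controllers whose PCI device id is 0x0a54 before starting spdk_tgt against them. The same enumeration can be reproduced from a shell roughly as follows (the rootdir path and the jq filter are taken from this log; the loop is a sketch, not the harness function):

  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # workspace path from this run
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      # keep only the device id the opal-revert step filters on
      if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]]; then
          echo "opal revert candidate: $bdf"
      fi
  done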
00:05:12.643 [2024-12-03 11:32:43.192039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565446 ] 00:05:12.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.902 [2024-12-03 11:32:43.262926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.902 [2024-12-03 11:32:43.336613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.902 [2024-12-03 11:32:43.336739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.471 11:32:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.471 11:32:43 -- common/autotest_common.sh@862 -- # return 0 00:05:13.471 11:32:43 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:05:13.471 11:32:43 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:05:13.471 11:32:43 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:16.759 nvme0n1 00:05:16.759 11:32:46 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:16.759 [2024-12-03 11:32:47.149365] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:16.759 request: 00:05:16.759 { 00:05:16.759 "nvme_ctrlr_name": "nvme0", 00:05:16.759 "password": "test", 00:05:16.759 "method": "bdev_nvme_opal_revert", 00:05:16.759 "req_id": 1 00:05:16.759 } 00:05:16.759 Got JSON-RPC error response 00:05:16.759 response: 00:05:16.759 { 00:05:16.759 "code": -32602, 00:05:16.759 "message": "Invalid parameters" 00:05:16.759 } 00:05:16.759 11:32:47 -- common/autotest_common.sh@1599 -- # true 00:05:16.759 11:32:47 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:05:16.759 11:32:47 -- common/autotest_common.sh@1603 -- # killprocess 3565446 00:05:16.759 11:32:47 -- common/autotest_common.sh@936 -- # '[' -z 3565446 ']' 00:05:16.759 11:32:47 -- common/autotest_common.sh@940 -- # kill -0 3565446 00:05:16.759 11:32:47 -- common/autotest_common.sh@941 -- # uname 00:05:16.759 11:32:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.759 11:32:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3565446 00:05:16.759 11:32:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.759 11:32:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.759 11:32:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3565446' 00:05:16.759 killing process with pid 3565446 00:05:16.759 11:32:47 -- common/autotest_common.sh@955 -- # kill 3565446 00:05:16.759 11:32:47 -- common/autotest_common.sh@960 -- # wait 3565446 00:05:19.295 11:32:49 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:19.295 11:32:49 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:19.295 11:32:49 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:19.295 11:32:49 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:19.295 11:32:49 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:19.295 11:32:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.295 11:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.295 11:32:49 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:19.295 11:32:49 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.295 11:32:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.295 11:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.295 ************************************ 00:05:19.295 START TEST env 00:05:19.295 ************************************ 00:05:19.295 11:32:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:19.295 * Looking for test storage... 00:05:19.295 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:19.295 11:32:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.295 11:32:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.295 11:32:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.295 11:32:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.295 11:32:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.295 11:32:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.295 11:32:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.295 11:32:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.295 11:32:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.295 11:32:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.295 11:32:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.295 11:32:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.295 11:32:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.295 11:32:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.295 11:32:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.295 11:32:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.295 11:32:49 -- scripts/common.sh@344 -- # : 1 00:05:19.295 11:32:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.295 11:32:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.295 11:32:49 -- scripts/common.sh@364 -- # decimal 1 00:05:19.295 11:32:49 -- scripts/common.sh@352 -- # local d=1 00:05:19.296 11:32:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.296 11:32:49 -- scripts/common.sh@354 -- # echo 1 00:05:19.296 11:32:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.296 11:32:49 -- scripts/common.sh@365 -- # decimal 2 00:05:19.556 11:32:49 -- scripts/common.sh@352 -- # local d=2 00:05:19.556 11:32:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.556 11:32:49 -- scripts/common.sh@354 -- # echo 2 00:05:19.556 11:32:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.556 11:32:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.556 11:32:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.556 11:32:49 -- scripts/common.sh@367 -- # return 0 00:05:19.556 11:32:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.556 11:32:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.556 --rc genhtml_branch_coverage=1 00:05:19.556 --rc genhtml_function_coverage=1 00:05:19.556 --rc genhtml_legend=1 00:05:19.556 --rc geninfo_all_blocks=1 00:05:19.556 --rc geninfo_unexecuted_blocks=1 00:05:19.556 00:05:19.556 ' 00:05:19.556 11:32:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.556 --rc genhtml_branch_coverage=1 00:05:19.556 --rc genhtml_function_coverage=1 00:05:19.556 --rc genhtml_legend=1 00:05:19.556 --rc geninfo_all_blocks=1 00:05:19.556 --rc geninfo_unexecuted_blocks=1 00:05:19.556 00:05:19.556 ' 00:05:19.556 11:32:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.556 --rc genhtml_branch_coverage=1 00:05:19.556 --rc genhtml_function_coverage=1 00:05:19.556 --rc genhtml_legend=1 00:05:19.556 --rc geninfo_all_blocks=1 00:05:19.556 --rc geninfo_unexecuted_blocks=1 00:05:19.556 00:05:19.556 ' 00:05:19.556 11:32:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.556 --rc genhtml_branch_coverage=1 00:05:19.556 --rc genhtml_function_coverage=1 00:05:19.556 --rc genhtml_legend=1 00:05:19.556 --rc geninfo_all_blocks=1 00:05:19.556 --rc geninfo_unexecuted_blocks=1 00:05:19.556 00:05:19.556 ' 00:05:19.556 11:32:49 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.556 11:32:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.556 11:32:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.556 11:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.556 ************************************ 00:05:19.556 START TEST env_memory 00:05:19.556 ************************************ 00:05:19.556 11:32:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.556 00:05:19.556 00:05:19.556 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.556 http://cunit.sourceforge.net/ 00:05:19.556 00:05:19.556 00:05:19.556 Suite: memory 00:05:19.556 Test: alloc and free memory map ...[2024-12-03 11:32:49.965391] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify 
failed 00:05:19.556 passed 00:05:19.556 Test: mem map translation ...[2024-12-03 11:32:49.983560] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:19.556 [2024-12-03 11:32:49.983575] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:19.556 [2024-12-03 11:32:49.983610] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:19.556 [2024-12-03 11:32:49.983618] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:19.556 passed 00:05:19.556 Test: mem map registration ...[2024-12-03 11:32:50.018841] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:19.556 [2024-12-03 11:32:50.018856] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:19.556 passed 00:05:19.556 Test: mem map adjacent registrations ...passed 00:05:19.556 00:05:19.556 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.556 suites 1 1 n/a 0 0 00:05:19.556 tests 4 4 4 0 0 00:05:19.556 asserts 152 152 152 0 n/a 00:05:19.556 00:05:19.556 Elapsed time = 0.131 seconds 00:05:19.556 00:05:19.556 real 0m0.145s 00:05:19.556 user 0m0.134s 00:05:19.556 sys 0m0.011s 00:05:19.556 11:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.556 11:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.556 ************************************ 00:05:19.556 END TEST env_memory 00:05:19.556 ************************************ 00:05:19.556 11:32:50 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.556 11:32:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.556 11:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.556 11:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.556 ************************************ 00:05:19.556 START TEST env_vtophys 00:05:19.556 ************************************ 00:05:19.556 11:32:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.556 EAL: lib.eal log level changed from notice to debug 00:05:19.556 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.556 EAL: Detected lcore 1 as core 1 on socket 0 00:05:19.556 EAL: Detected lcore 2 as core 2 on socket 0 00:05:19.556 EAL: Detected lcore 3 as core 3 on socket 0 00:05:19.556 EAL: Detected lcore 4 as core 4 on socket 0 00:05:19.556 EAL: Detected lcore 5 as core 5 on socket 0 00:05:19.556 EAL: Detected lcore 6 as core 6 on socket 0 00:05:19.556 EAL: Detected lcore 7 as core 8 on socket 0 00:05:19.556 EAL: Detected lcore 8 as core 9 on socket 0 00:05:19.556 EAL: Detected lcore 9 as core 10 on socket 0 00:05:19.556 EAL: Detected lcore 10 as core 11 on socket 0 00:05:19.556 EAL: Detected lcore 11 as core 12 on socket 0 00:05:19.556 EAL: Detected lcore 12 as core 13 on socket 0 00:05:19.556 EAL: Detected lcore 13 as core 14 on socket 0 00:05:19.556 EAL: 
Detected lcore 14 as core 16 on socket 0 00:05:19.556 EAL: Detected lcore 15 as core 17 on socket 0 00:05:19.556 EAL: Detected lcore 16 as core 18 on socket 0 00:05:19.556 EAL: Detected lcore 17 as core 19 on socket 0 00:05:19.556 EAL: Detected lcore 18 as core 20 on socket 0 00:05:19.556 EAL: Detected lcore 19 as core 21 on socket 0 00:05:19.556 EAL: Detected lcore 20 as core 22 on socket 0 00:05:19.556 EAL: Detected lcore 21 as core 24 on socket 0 00:05:19.556 EAL: Detected lcore 22 as core 25 on socket 0 00:05:19.556 EAL: Detected lcore 23 as core 26 on socket 0 00:05:19.557 EAL: Detected lcore 24 as core 27 on socket 0 00:05:19.557 EAL: Detected lcore 25 as core 28 on socket 0 00:05:19.557 EAL: Detected lcore 26 as core 29 on socket 0 00:05:19.557 EAL: Detected lcore 27 as core 30 on socket 0 00:05:19.557 EAL: Detected lcore 28 as core 0 on socket 1 00:05:19.557 EAL: Detected lcore 29 as core 1 on socket 1 00:05:19.557 EAL: Detected lcore 30 as core 2 on socket 1 00:05:19.557 EAL: Detected lcore 31 as core 3 on socket 1 00:05:19.557 EAL: Detected lcore 32 as core 4 on socket 1 00:05:19.557 EAL: Detected lcore 33 as core 5 on socket 1 00:05:19.557 EAL: Detected lcore 34 as core 6 on socket 1 00:05:19.557 EAL: Detected lcore 35 as core 8 on socket 1 00:05:19.557 EAL: Detected lcore 36 as core 9 on socket 1 00:05:19.557 EAL: Detected lcore 37 as core 10 on socket 1 00:05:19.557 EAL: Detected lcore 38 as core 11 on socket 1 00:05:19.557 EAL: Detected lcore 39 as core 12 on socket 1 00:05:19.557 EAL: Detected lcore 40 as core 13 on socket 1 00:05:19.557 EAL: Detected lcore 41 as core 14 on socket 1 00:05:19.557 EAL: Detected lcore 42 as core 16 on socket 1 00:05:19.557 EAL: Detected lcore 43 as core 17 on socket 1 00:05:19.557 EAL: Detected lcore 44 as core 18 on socket 1 00:05:19.557 EAL: Detected lcore 45 as core 19 on socket 1 00:05:19.557 EAL: Detected lcore 46 as core 20 on socket 1 00:05:19.557 EAL: Detected lcore 47 as core 21 on socket 1 00:05:19.557 EAL: Detected lcore 48 as core 22 on socket 1 00:05:19.557 EAL: Detected lcore 49 as core 24 on socket 1 00:05:19.557 EAL: Detected lcore 50 as core 25 on socket 1 00:05:19.557 EAL: Detected lcore 51 as core 26 on socket 1 00:05:19.557 EAL: Detected lcore 52 as core 27 on socket 1 00:05:19.557 EAL: Detected lcore 53 as core 28 on socket 1 00:05:19.557 EAL: Detected lcore 54 as core 29 on socket 1 00:05:19.557 EAL: Detected lcore 55 as core 30 on socket 1 00:05:19.557 EAL: Detected lcore 56 as core 0 on socket 0 00:05:19.557 EAL: Detected lcore 57 as core 1 on socket 0 00:05:19.557 EAL: Detected lcore 58 as core 2 on socket 0 00:05:19.557 EAL: Detected lcore 59 as core 3 on socket 0 00:05:19.557 EAL: Detected lcore 60 as core 4 on socket 0 00:05:19.557 EAL: Detected lcore 61 as core 5 on socket 0 00:05:19.557 EAL: Detected lcore 62 as core 6 on socket 0 00:05:19.557 EAL: Detected lcore 63 as core 8 on socket 0 00:05:19.557 EAL: Detected lcore 64 as core 9 on socket 0 00:05:19.557 EAL: Detected lcore 65 as core 10 on socket 0 00:05:19.557 EAL: Detected lcore 66 as core 11 on socket 0 00:05:19.557 EAL: Detected lcore 67 as core 12 on socket 0 00:05:19.557 EAL: Detected lcore 68 as core 13 on socket 0 00:05:19.557 EAL: Detected lcore 69 as core 14 on socket 0 00:05:19.557 EAL: Detected lcore 70 as core 16 on socket 0 00:05:19.557 EAL: Detected lcore 71 as core 17 on socket 0 00:05:19.557 EAL: Detected lcore 72 as core 18 on socket 0 00:05:19.557 EAL: Detected lcore 73 as core 19 on socket 0 00:05:19.557 EAL: Detected lcore 74 as core 20 on 
socket 0 00:05:19.557 EAL: Detected lcore 75 as core 21 on socket 0 00:05:19.557 EAL: Detected lcore 76 as core 22 on socket 0 00:05:19.557 EAL: Detected lcore 77 as core 24 on socket 0 00:05:19.557 EAL: Detected lcore 78 as core 25 on socket 0 00:05:19.557 EAL: Detected lcore 79 as core 26 on socket 0 00:05:19.557 EAL: Detected lcore 80 as core 27 on socket 0 00:05:19.557 EAL: Detected lcore 81 as core 28 on socket 0 00:05:19.557 EAL: Detected lcore 82 as core 29 on socket 0 00:05:19.557 EAL: Detected lcore 83 as core 30 on socket 0 00:05:19.557 EAL: Detected lcore 84 as core 0 on socket 1 00:05:19.557 EAL: Detected lcore 85 as core 1 on socket 1 00:05:19.557 EAL: Detected lcore 86 as core 2 on socket 1 00:05:19.557 EAL: Detected lcore 87 as core 3 on socket 1 00:05:19.557 EAL: Detected lcore 88 as core 4 on socket 1 00:05:19.557 EAL: Detected lcore 89 as core 5 on socket 1 00:05:19.557 EAL: Detected lcore 90 as core 6 on socket 1 00:05:19.557 EAL: Detected lcore 91 as core 8 on socket 1 00:05:19.557 EAL: Detected lcore 92 as core 9 on socket 1 00:05:19.557 EAL: Detected lcore 93 as core 10 on socket 1 00:05:19.557 EAL: Detected lcore 94 as core 11 on socket 1 00:05:19.557 EAL: Detected lcore 95 as core 12 on socket 1 00:05:19.557 EAL: Detected lcore 96 as core 13 on socket 1 00:05:19.557 EAL: Detected lcore 97 as core 14 on socket 1 00:05:19.557 EAL: Detected lcore 98 as core 16 on socket 1 00:05:19.557 EAL: Detected lcore 99 as core 17 on socket 1 00:05:19.557 EAL: Detected lcore 100 as core 18 on socket 1 00:05:19.557 EAL: Detected lcore 101 as core 19 on socket 1 00:05:19.557 EAL: Detected lcore 102 as core 20 on socket 1 00:05:19.557 EAL: Detected lcore 103 as core 21 on socket 1 00:05:19.557 EAL: Detected lcore 104 as core 22 on socket 1 00:05:19.557 EAL: Detected lcore 105 as core 24 on socket 1 00:05:19.557 EAL: Detected lcore 106 as core 25 on socket 1 00:05:19.557 EAL: Detected lcore 107 as core 26 on socket 1 00:05:19.557 EAL: Detected lcore 108 as core 27 on socket 1 00:05:19.557 EAL: Detected lcore 109 as core 28 on socket 1 00:05:19.557 EAL: Detected lcore 110 as core 29 on socket 1 00:05:19.557 EAL: Detected lcore 111 as core 30 on socket 1 00:05:19.557 EAL: Maximum logical cores by configuration: 128 00:05:19.557 EAL: Detected CPU lcores: 112 00:05:19.557 EAL: Detected NUMA nodes: 2 00:05:19.557 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:19.557 EAL: Detected shared linkage of DPDK 00:05:19.557 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.557 EAL: Bus pci wants IOVA as 'DC' 00:05:19.557 EAL: Buses did not request a specific IOVA mode. 00:05:19.557 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.557 EAL: Selected IOVA mode 'VA' 00:05:19.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.557 EAL: Probing VFIO support... 00:05:19.557 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.557 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.557 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.557 EAL: VFIO support initialized 00:05:19.557 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.557 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.557 EAL: Setting up physically contiguous memory... 
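EAL has detected 112 logical cores across 2 NUMA nodes here and is about to back its memseg lists with the 2 MiB hugepages that the earlier Hugepages table showed reserved on node0 only. The per-node pools can be inspected directly through standard kernel sysfs files, independent of the harness, for example:

  # per-NUMA-node 2 MiB hugepage pools backing the EAL memseg lists
  for node in /sys/devices/system/node/node[0-9]*; do
      printf '%s: %s free / %s total\n' "$(basename "$node")" \
          "$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")" \
          "$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")"
  done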
00:05:19.557 EAL: Setting maximum number of open files to 524288 00:05:19.557 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.557 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.557 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.557 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.557 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.557 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.557 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.557 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.557 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.557 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.557 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.557 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.557 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.557 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.557 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.557 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.557 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.557 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.557 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.557 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.557 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.557 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.557 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.557 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.557 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.558 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.558 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.558 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:19.558 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.558 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.558 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.558 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.558 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.558 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.558 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.558 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.558 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.558 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.558 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:19.558 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:19.558 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.558 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:19.558 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.558 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.558 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:19.558 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:19.558 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.558 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:19.558 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.558 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.558 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:19.558 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:19.558 EAL: Hugepages will be freed exactly as allocated. 00:05:19.558 EAL: No shared files mode enabled, IPC is disabled 00:05:19.558 EAL: No shared files mode enabled, IPC is disabled 00:05:19.558 EAL: TSC frequency is ~2500000 KHz 00:05:19.558 EAL: Main lcore 0 is ready (tid=7fa899c26a00;cpuset=[0]) 00:05:19.558 EAL: Trying to obtain current memory policy. 00:05:19.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.558 EAL: Restoring previous memory policy: 0 00:05:19.558 EAL: request: mp_malloc_sync 00:05:19.558 EAL: No shared files mode enabled, IPC is disabled 00:05:19.558 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.558 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.817 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.817 00:05:19.817 00:05:19.817 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.817 http://cunit.sourceforge.net/ 00:05:19.817 00:05:19.817 00:05:19.817 Suite: components_suite 00:05:19.817 Test: vtophys_malloc_test ...passed 00:05:19.817 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.817 EAL: Restoring previous memory policy: 4 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.817 EAL: Trying to obtain current memory policy. 00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.817 EAL: Restoring previous memory policy: 4 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was expanded by 6MB 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was shrunk by 6MB 00:05:19.817 EAL: Trying to obtain current memory policy. 00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.817 EAL: Restoring previous memory policy: 4 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was expanded by 10MB 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was shrunk by 10MB 00:05:19.817 EAL: Trying to obtain current memory policy. 
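Each "Setting policy MPOL_PREFERRED / heap expanded by N MB / shrunk by N MB" pair in this stretch is the vtophys_spdk_malloc_test case allocating and freeing progressively larger buffers through the mem event callback registered above. The two unit-test binaries driving this suite can also be run on their own, using the paths already printed in this log (root privileges for hugepage access are an assumption, not something the log states):

  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # workspace path from this run
  sudo "$rootdir/test/env/memory/memory_ut"                # mem map unit tests
  sudo "$rootdir/test/env/vtophys/vtophys"                 # the malloc/vtophys suite shown here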
00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.817 EAL: Restoring previous memory policy: 4 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.817 EAL: Trying to obtain current memory policy. 00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.817 EAL: Restoring previous memory policy: 4 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was expanded by 34MB 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was shrunk by 34MB 00:05:19.817 EAL: Trying to obtain current memory policy. 00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.817 EAL: Restoring previous memory policy: 4 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was expanded by 66MB 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was shrunk by 66MB 00:05:19.817 EAL: Trying to obtain current memory policy. 00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.817 EAL: Restoring previous memory policy: 4 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was expanded by 130MB 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was shrunk by 130MB 00:05:19.817 EAL: Trying to obtain current memory policy. 00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.817 EAL: Restoring previous memory policy: 4 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was expanded by 258MB 00:05:19.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.817 EAL: request: mp_malloc_sync 00:05:19.817 EAL: No shared files mode enabled, IPC is disabled 00:05:19.817 EAL: Heap on socket 0 was shrunk by 258MB 00:05:19.817 EAL: Trying to obtain current memory policy. 
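While the heap is being expanded and shrunk like this, the effect is visible from outside the process as the 2 MiB hugepage pool draining and refilling; a second terminal can watch it with nothing more than procfs (purely illustrative, not part of the test):

  # HugePages_Free drops as the heap grows and recovers as it is shrunk
  watch -n1 'grep HugePages_ /proc/meminfo'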
00:05:19.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.077 EAL: Restoring previous memory policy: 4 00:05:20.077 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.077 EAL: request: mp_malloc_sync 00:05:20.077 EAL: No shared files mode enabled, IPC is disabled 00:05:20.077 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.077 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.077 EAL: request: mp_malloc_sync 00:05:20.077 EAL: No shared files mode enabled, IPC is disabled 00:05:20.077 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.077 EAL: Trying to obtain current memory policy. 00:05:20.077 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.336 EAL: Restoring previous memory policy: 4 00:05:20.336 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.336 EAL: request: mp_malloc_sync 00:05:20.336 EAL: No shared files mode enabled, IPC is disabled 00:05:20.336 EAL: Heap on socket 0 was expanded by 1026MB 00:05:20.595 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.595 EAL: request: mp_malloc_sync 00:05:20.595 EAL: No shared files mode enabled, IPC is disabled 00:05:20.596 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:20.596 passed 00:05:20.596 00:05:20.596 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.596 suites 1 1 n/a 0 0 00:05:20.596 tests 2 2 2 0 0 00:05:20.596 asserts 497 497 497 0 n/a 00:05:20.596 00:05:20.596 Elapsed time = 0.968 seconds 00:05:20.596 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.596 EAL: request: mp_malloc_sync 00:05:20.596 EAL: No shared files mode enabled, IPC is disabled 00:05:20.596 EAL: Heap on socket 0 was shrunk by 2MB 00:05:20.596 EAL: No shared files mode enabled, IPC is disabled 00:05:20.596 EAL: No shared files mode enabled, IPC is disabled 00:05:20.596 EAL: No shared files mode enabled, IPC is disabled 00:05:20.596 00:05:20.596 real 0m1.097s 00:05:20.596 user 0m0.636s 00:05:20.596 sys 0m0.434s 00:05:20.596 11:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.596 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.596 ************************************ 00:05:20.596 END TEST env_vtophys 00:05:20.596 ************************************ 00:05:20.855 11:32:51 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.855 11:32:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.855 11:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.855 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.855 ************************************ 00:05:20.855 START TEST env_pci 00:05:20.855 ************************************ 00:05:20.855 11:32:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.855 00:05:20.855 00:05:20.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.855 http://cunit.sourceforge.net/ 00:05:20.855 00:05:20.855 00:05:20.855 Suite: pci 00:05:20.855 Test: pci_hook ...[2024-12-03 11:32:51.265889] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3567006 has claimed it 00:05:20.855 EAL: Cannot find device (10000:00:01.0) 00:05:20.855 EAL: Failed to attach device on primary process 00:05:20.855 passed 00:05:20.855 00:05:20.855 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.855 suites 1 1 n/a 0 0 00:05:20.855 tests 1 1 1 0 0 00:05:20.855 asserts 
25 25 25 0 n/a 00:05:20.855 00:05:20.855 Elapsed time = 0.035 seconds 00:05:20.855 00:05:20.855 real 0m0.058s 00:05:20.855 user 0m0.016s 00:05:20.855 sys 0m0.042s 00:05:20.855 11:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.855 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.855 ************************************ 00:05:20.855 END TEST env_pci 00:05:20.855 ************************************ 00:05:20.855 11:32:51 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:20.855 11:32:51 -- env/env.sh@15 -- # uname 00:05:20.855 11:32:51 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:20.855 11:32:51 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:20.855 11:32:51 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.855 11:32:51 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:20.855 11:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.855 11:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.855 ************************************ 00:05:20.855 START TEST env_dpdk_post_init 00:05:20.855 ************************************ 00:05:20.855 11:32:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.855 EAL: Detected CPU lcores: 112 00:05:20.855 EAL: Detected NUMA nodes: 2 00:05:20.855 EAL: Detected shared linkage of DPDK 00:05:20.855 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.855 EAL: Selected IOVA mode 'VA' 00:05:20.855 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.855 EAL: VFIO support initialized 00:05:20.855 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.115 EAL: Using IOMMU type 1 (Type 1) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:21.115 EAL: Ignore mapping IO port bar(1) 00:05:21.115 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:22.055 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:26.256 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:26.256 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:26.256 Starting DPDK initialization... 00:05:26.256 Starting SPDK post initialization... 00:05:26.256 SPDK NVMe probe 00:05:26.256 Attaching to 0000:d8:00.0 00:05:26.257 Attached to 0000:d8:00.0 00:05:26.257 Cleaning up... 00:05:26.257 00:05:26.257 real 0m5.344s 00:05:26.257 user 0m3.983s 00:05:26.257 sys 0m0.411s 00:05:26.257 11:32:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.257 11:32:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.257 ************************************ 00:05:26.257 END TEST env_dpdk_post_init 00:05:26.257 ************************************ 00:05:26.257 11:32:56 -- env/env.sh@26 -- # uname 00:05:26.257 11:32:56 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:26.257 11:32:56 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:26.257 11:32:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.257 11:32:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.257 11:32:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.257 ************************************ 00:05:26.257 START TEST env_mem_callbacks 00:05:26.257 ************************************ 00:05:26.257 11:32:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:26.257 EAL: Detected CPU lcores: 112 00:05:26.257 EAL: Detected NUMA nodes: 2 00:05:26.257 EAL: Detected shared linkage of DPDK 00:05:26.257 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:26.257 EAL: Selected IOVA mode 'VA' 00:05:26.257 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.257 EAL: VFIO support initialized 00:05:26.257 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:26.257 00:05:26.257 00:05:26.257 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.257 http://cunit.sourceforge.net/ 00:05:26.257 00:05:26.257 00:05:26.257 Suite: memory 00:05:26.257 Test: test ... 
00:05:26.257 register 0x200000200000 2097152 00:05:26.257 malloc 3145728 00:05:26.257 register 0x200000400000 4194304 00:05:26.257 buf 0x200000500000 len 3145728 PASSED 00:05:26.257 malloc 64 00:05:26.257 buf 0x2000004fff40 len 64 PASSED 00:05:26.257 malloc 4194304 00:05:26.257 register 0x200000800000 6291456 00:05:26.257 buf 0x200000a00000 len 4194304 PASSED 00:05:26.257 free 0x200000500000 3145728 00:05:26.257 free 0x2000004fff40 64 00:05:26.257 unregister 0x200000400000 4194304 PASSED 00:05:26.257 free 0x200000a00000 4194304 00:05:26.257 unregister 0x200000800000 6291456 PASSED 00:05:26.257 malloc 8388608 00:05:26.257 register 0x200000400000 10485760 00:05:26.257 buf 0x200000600000 len 8388608 PASSED 00:05:26.257 free 0x200000600000 8388608 00:05:26.257 unregister 0x200000400000 10485760 PASSED 00:05:26.257 passed 00:05:26.257 00:05:26.257 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.257 suites 1 1 n/a 0 0 00:05:26.257 tests 1 1 1 0 0 00:05:26.257 asserts 15 15 15 0 n/a 00:05:26.257 00:05:26.257 Elapsed time = 0.005 seconds 00:05:26.257 00:05:26.257 real 0m0.065s 00:05:26.257 user 0m0.020s 00:05:26.257 sys 0m0.044s 00:05:26.257 11:32:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.257 11:32:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.257 ************************************ 00:05:26.257 END TEST env_mem_callbacks 00:05:26.257 ************************************ 00:05:26.257 00:05:26.257 real 0m7.117s 00:05:26.257 user 0m4.965s 00:05:26.257 sys 0m1.224s 00:05:26.257 11:32:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.257 11:32:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.257 ************************************ 00:05:26.257 END TEST env 00:05:26.257 ************************************ 00:05:26.517 11:32:56 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:26.517 11:32:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.517 11:32:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.517 11:32:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.517 ************************************ 00:05:26.517 START TEST rpc 00:05:26.517 ************************************ 00:05:26.517 11:32:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:26.517 * Looking for test storage... 
00:05:26.517 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:26.517 11:32:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.517 11:32:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.517 11:32:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.517 11:32:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.517 11:32:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.517 11:32:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.517 11:32:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.517 11:32:57 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.517 11:32:57 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.517 11:32:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.517 11:32:57 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.517 11:32:57 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.517 11:32:57 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.517 11:32:57 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.517 11:32:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.517 11:32:57 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.517 11:32:57 -- scripts/common.sh@344 -- # : 1 00:05:26.517 11:32:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.517 11:32:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.517 11:32:57 -- scripts/common.sh@364 -- # decimal 1 00:05:26.517 11:32:57 -- scripts/common.sh@352 -- # local d=1 00:05:26.517 11:32:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.517 11:32:57 -- scripts/common.sh@354 -- # echo 1 00:05:26.517 11:32:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.517 11:32:57 -- scripts/common.sh@365 -- # decimal 2 00:05:26.517 11:32:57 -- scripts/common.sh@352 -- # local d=2 00:05:26.517 11:32:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.517 11:32:57 -- scripts/common.sh@354 -- # echo 2 00:05:26.517 11:32:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.517 11:32:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.517 11:32:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.517 11:32:57 -- scripts/common.sh@367 -- # return 0 00:05:26.517 11:32:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.517 11:32:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.517 --rc genhtml_branch_coverage=1 00:05:26.517 --rc genhtml_function_coverage=1 00:05:26.517 --rc genhtml_legend=1 00:05:26.517 --rc geninfo_all_blocks=1 00:05:26.517 --rc geninfo_unexecuted_blocks=1 00:05:26.517 00:05:26.517 ' 00:05:26.517 11:32:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.517 --rc genhtml_branch_coverage=1 00:05:26.517 --rc genhtml_function_coverage=1 00:05:26.517 --rc genhtml_legend=1 00:05:26.517 --rc geninfo_all_blocks=1 00:05:26.517 --rc geninfo_unexecuted_blocks=1 00:05:26.517 00:05:26.517 ' 00:05:26.517 11:32:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.517 --rc genhtml_branch_coverage=1 00:05:26.517 --rc genhtml_function_coverage=1 00:05:26.517 --rc genhtml_legend=1 00:05:26.517 --rc geninfo_all_blocks=1 00:05:26.517 --rc geninfo_unexecuted_blocks=1 00:05:26.517 00:05:26.517 ' 
00:05:26.517 11:32:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.517 --rc genhtml_branch_coverage=1 00:05:26.517 --rc genhtml_function_coverage=1 00:05:26.517 --rc genhtml_legend=1 00:05:26.517 --rc geninfo_all_blocks=1 00:05:26.517 --rc geninfo_unexecuted_blocks=1 00:05:26.517 00:05:26.517 ' 00:05:26.517 11:32:57 -- rpc/rpc.sh@65 -- # spdk_pid=3568107 00:05:26.517 11:32:57 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:26.517 11:32:57 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.517 11:32:57 -- rpc/rpc.sh@67 -- # waitforlisten 3568107 00:05:26.517 11:32:57 -- common/autotest_common.sh@829 -- # '[' -z 3568107 ']' 00:05:26.517 11:32:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.517 11:32:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.517 11:32:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.517 11:32:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.517 11:32:57 -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 [2024-12-03 11:32:57.131073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.776 [2024-12-03 11:32:57.131138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568107 ] 00:05:26.776 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.776 [2024-12-03 11:32:57.200064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.776 [2024-12-03 11:32:57.272951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.776 [2024-12-03 11:32:57.273058] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:26.776 [2024-12-03 11:32:57.273069] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3568107' to capture a snapshot of events at runtime. 00:05:26.776 [2024-12-03 11:32:57.273078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3568107 for offline analysis/debug. 
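At this point rpc.sh has launched the spdk_tgt application (pid 3568107, tracepoint group "bdev" enabled) and is waiting for it to listen on /var/tmp/spdk.sock; the rpc_integrity, rpc_plugins, rpc_trace_cmd_test and rpc_daemon_integrity subtests that follow all drive this single target through rpc_cmd, a thin wrapper around scripts/rpc.py on the default RPC socket. A minimal hand-run sketch of the rpc_integrity sequence follows; it is illustrative only, not the exact test code, and the jq checks stand in for the test's assertions:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  # create an 8 MiB malloc bdev with 512-byte blocks; the target names it Malloc0
  ./scripts/rpc.py bdev_malloc_create 8 512
  # stack a passthru bdev on top, which claims Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length   # expect 2 (Malloc0 + Passthru0)
  # tear down in reverse order and verify the bdev list is empty again
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length   # expect 0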
00:05:26.776 [2024-12-03 11:32:57.273097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.344 11:32:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.344 11:32:57 -- common/autotest_common.sh@862 -- # return 0 00:05:27.344 11:32:57 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:27.344 11:32:57 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:27.344 11:32:57 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:27.344 11:32:57 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:27.345 11:32:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.345 11:32:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.345 11:32:57 -- common/autotest_common.sh@10 -- # set +x 00:05:27.345 ************************************ 00:05:27.345 START TEST rpc_integrity 00:05:27.345 ************************************ 00:05:27.345 11:32:57 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:27.345 11:32:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.345 11:32:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.345 11:32:57 -- common/autotest_common.sh@10 -- # set +x 00:05:27.345 11:32:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.345 11:32:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.345 11:32:57 -- rpc/rpc.sh@13 -- # jq length 00:05:27.604 11:32:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.604 11:32:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.604 11:32:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.604 11:32:57 -- common/autotest_common.sh@10 -- # set +x 00:05:27.604 11:32:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.604 11:32:57 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:27.604 11:32:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.604 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.604 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.604 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.604 11:32:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.604 { 00:05:27.604 "name": "Malloc0", 00:05:27.604 "aliases": [ 00:05:27.604 "5f9f1281-6dc7-48d9-817c-6972568910ae" 00:05:27.604 ], 00:05:27.604 "product_name": "Malloc disk", 00:05:27.604 "block_size": 512, 00:05:27.604 "num_blocks": 16384, 00:05:27.604 "uuid": "5f9f1281-6dc7-48d9-817c-6972568910ae", 00:05:27.604 "assigned_rate_limits": { 00:05:27.604 "rw_ios_per_sec": 0, 00:05:27.604 "rw_mbytes_per_sec": 0, 00:05:27.604 "r_mbytes_per_sec": 0, 00:05:27.604 "w_mbytes_per_sec": 0 00:05:27.604 }, 00:05:27.604 "claimed": false, 00:05:27.604 "zoned": false, 00:05:27.604 "supported_io_types": { 00:05:27.604 "read": true, 00:05:27.604 "write": true, 00:05:27.604 "unmap": true, 00:05:27.604 "write_zeroes": true, 00:05:27.604 "flush": true, 00:05:27.604 "reset": true, 00:05:27.604 "compare": false, 00:05:27.604 "compare_and_write": false, 00:05:27.604 "abort": true, 00:05:27.604 "nvme_admin": 
false, 00:05:27.604 "nvme_io": false 00:05:27.604 }, 00:05:27.604 "memory_domains": [ 00:05:27.604 { 00:05:27.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.604 "dma_device_type": 2 00:05:27.604 } 00:05:27.604 ], 00:05:27.604 "driver_specific": {} 00:05:27.604 } 00:05:27.604 ]' 00:05:27.604 11:32:58 -- rpc/rpc.sh@17 -- # jq length 00:05:27.604 11:32:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.604 11:32:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:27.604 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.604 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.604 [2024-12-03 11:32:58.071123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:27.604 [2024-12-03 11:32:58.071154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.604 [2024-12-03 11:32:58.071167] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db4f40 00:05:27.604 [2024-12-03 11:32:58.071175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.604 [2024-12-03 11:32:58.072185] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.604 [2024-12-03 11:32:58.072206] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.604 Passthru0 00:05:27.604 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.604 11:32:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.604 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.604 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.604 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.604 11:32:58 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.604 { 00:05:27.604 "name": "Malloc0", 00:05:27.604 "aliases": [ 00:05:27.604 "5f9f1281-6dc7-48d9-817c-6972568910ae" 00:05:27.604 ], 00:05:27.604 "product_name": "Malloc disk", 00:05:27.604 "block_size": 512, 00:05:27.604 "num_blocks": 16384, 00:05:27.604 "uuid": "5f9f1281-6dc7-48d9-817c-6972568910ae", 00:05:27.604 "assigned_rate_limits": { 00:05:27.604 "rw_ios_per_sec": 0, 00:05:27.604 "rw_mbytes_per_sec": 0, 00:05:27.604 "r_mbytes_per_sec": 0, 00:05:27.604 "w_mbytes_per_sec": 0 00:05:27.604 }, 00:05:27.604 "claimed": true, 00:05:27.604 "claim_type": "exclusive_write", 00:05:27.604 "zoned": false, 00:05:27.604 "supported_io_types": { 00:05:27.604 "read": true, 00:05:27.604 "write": true, 00:05:27.604 "unmap": true, 00:05:27.604 "write_zeroes": true, 00:05:27.604 "flush": true, 00:05:27.604 "reset": true, 00:05:27.604 "compare": false, 00:05:27.604 "compare_and_write": false, 00:05:27.604 "abort": true, 00:05:27.604 "nvme_admin": false, 00:05:27.604 "nvme_io": false 00:05:27.604 }, 00:05:27.604 "memory_domains": [ 00:05:27.604 { 00:05:27.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.604 "dma_device_type": 2 00:05:27.604 } 00:05:27.604 ], 00:05:27.604 "driver_specific": {} 00:05:27.604 }, 00:05:27.604 { 00:05:27.604 "name": "Passthru0", 00:05:27.604 "aliases": [ 00:05:27.604 "7307ee0c-d2f6-564a-8fc8-0fa4b670c6f8" 00:05:27.604 ], 00:05:27.604 "product_name": "passthru", 00:05:27.604 "block_size": 512, 00:05:27.604 "num_blocks": 16384, 00:05:27.604 "uuid": "7307ee0c-d2f6-564a-8fc8-0fa4b670c6f8", 00:05:27.604 "assigned_rate_limits": { 00:05:27.604 "rw_ios_per_sec": 0, 00:05:27.604 "rw_mbytes_per_sec": 0, 00:05:27.604 "r_mbytes_per_sec": 0, 00:05:27.604 "w_mbytes_per_sec": 0 00:05:27.604 }, 00:05:27.604 "claimed": 
false, 00:05:27.604 "zoned": false, 00:05:27.604 "supported_io_types": { 00:05:27.604 "read": true, 00:05:27.604 "write": true, 00:05:27.604 "unmap": true, 00:05:27.604 "write_zeroes": true, 00:05:27.604 "flush": true, 00:05:27.604 "reset": true, 00:05:27.604 "compare": false, 00:05:27.604 "compare_and_write": false, 00:05:27.604 "abort": true, 00:05:27.604 "nvme_admin": false, 00:05:27.604 "nvme_io": false 00:05:27.604 }, 00:05:27.604 "memory_domains": [ 00:05:27.604 { 00:05:27.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.604 "dma_device_type": 2 00:05:27.604 } 00:05:27.604 ], 00:05:27.604 "driver_specific": { 00:05:27.604 "passthru": { 00:05:27.604 "name": "Passthru0", 00:05:27.604 "base_bdev_name": "Malloc0" 00:05:27.604 } 00:05:27.604 } 00:05:27.604 } 00:05:27.604 ]' 00:05:27.604 11:32:58 -- rpc/rpc.sh@21 -- # jq length 00:05:27.605 11:32:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.605 11:32:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.605 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.605 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.605 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.605 11:32:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:27.605 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.605 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.605 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.605 11:32:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.605 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.605 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.605 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.605 11:32:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.605 11:32:58 -- rpc/rpc.sh@26 -- # jq length 00:05:27.864 11:32:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.864 00:05:27.864 real 0m0.277s 00:05:27.864 user 0m0.170s 00:05:27.864 sys 0m0.040s 00:05:27.864 11:32:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 ************************************ 00:05:27.864 END TEST rpc_integrity 00:05:27.864 ************************************ 00:05:27.864 11:32:58 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:27.864 11:32:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.864 11:32:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 ************************************ 00:05:27.864 START TEST rpc_plugins 00:05:27.864 ************************************ 00:05:27.864 11:32:58 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:27.864 11:32:58 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:27.864 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.864 11:32:58 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:27.864 11:32:58 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:27.864 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.864 11:32:58 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:27.864 { 00:05:27.864 "name": 
"Malloc1", 00:05:27.864 "aliases": [ 00:05:27.864 "92a079b8-d577-4a94-a647-2bbc193250b0" 00:05:27.864 ], 00:05:27.864 "product_name": "Malloc disk", 00:05:27.864 "block_size": 4096, 00:05:27.864 "num_blocks": 256, 00:05:27.864 "uuid": "92a079b8-d577-4a94-a647-2bbc193250b0", 00:05:27.864 "assigned_rate_limits": { 00:05:27.864 "rw_ios_per_sec": 0, 00:05:27.864 "rw_mbytes_per_sec": 0, 00:05:27.864 "r_mbytes_per_sec": 0, 00:05:27.864 "w_mbytes_per_sec": 0 00:05:27.864 }, 00:05:27.864 "claimed": false, 00:05:27.864 "zoned": false, 00:05:27.864 "supported_io_types": { 00:05:27.864 "read": true, 00:05:27.864 "write": true, 00:05:27.864 "unmap": true, 00:05:27.864 "write_zeroes": true, 00:05:27.864 "flush": true, 00:05:27.864 "reset": true, 00:05:27.864 "compare": false, 00:05:27.864 "compare_and_write": false, 00:05:27.864 "abort": true, 00:05:27.864 "nvme_admin": false, 00:05:27.864 "nvme_io": false 00:05:27.864 }, 00:05:27.864 "memory_domains": [ 00:05:27.864 { 00:05:27.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.864 "dma_device_type": 2 00:05:27.864 } 00:05:27.864 ], 00:05:27.864 "driver_specific": {} 00:05:27.864 } 00:05:27.864 ]' 00:05:27.864 11:32:58 -- rpc/rpc.sh@32 -- # jq length 00:05:27.864 11:32:58 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:27.864 11:32:58 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:27.864 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.864 11:32:58 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:27.864 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.864 11:32:58 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:27.864 11:32:58 -- rpc/rpc.sh@36 -- # jq length 00:05:27.864 11:32:58 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:27.864 00:05:27.864 real 0m0.133s 00:05:27.864 user 0m0.078s 00:05:27.864 sys 0m0.019s 00:05:27.864 11:32:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 ************************************ 00:05:27.864 END TEST rpc_plugins 00:05:27.864 ************************************ 00:05:27.864 11:32:58 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:27.864 11:32:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.864 11:32:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 ************************************ 00:05:27.864 START TEST rpc_trace_cmd_test 00:05:27.864 ************************************ 00:05:27.864 11:32:58 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:27.864 11:32:58 -- rpc/rpc.sh@40 -- # local info 00:05:27.864 11:32:58 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:27.864 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.864 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:27.864 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.864 11:32:58 -- rpc/rpc.sh@42 -- # info='{ 00:05:27.864 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3568107", 00:05:27.864 "tpoint_group_mask": "0x8", 00:05:27.864 "iscsi_conn": { 00:05:27.864 "mask": "0x2", 00:05:27.864 "tpoint_mask": "0x0" 00:05:27.864 }, 00:05:27.864 
"scsi": { 00:05:27.864 "mask": "0x4", 00:05:27.864 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "bdev": { 00:05:27.865 "mask": "0x8", 00:05:27.865 "tpoint_mask": "0xffffffffffffffff" 00:05:27.865 }, 00:05:27.865 "nvmf_rdma": { 00:05:27.865 "mask": "0x10", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "nvmf_tcp": { 00:05:27.865 "mask": "0x20", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "ftl": { 00:05:27.865 "mask": "0x40", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "blobfs": { 00:05:27.865 "mask": "0x80", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "dsa": { 00:05:27.865 "mask": "0x200", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "thread": { 00:05:27.865 "mask": "0x400", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "nvme_pcie": { 00:05:27.865 "mask": "0x800", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "iaa": { 00:05:27.865 "mask": "0x1000", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "nvme_tcp": { 00:05:27.865 "mask": "0x2000", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 }, 00:05:27.865 "bdev_nvme": { 00:05:27.865 "mask": "0x4000", 00:05:27.865 "tpoint_mask": "0x0" 00:05:27.865 } 00:05:27.865 }' 00:05:27.865 11:32:58 -- rpc/rpc.sh@43 -- # jq length 00:05:28.124 11:32:58 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:28.124 11:32:58 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:28.124 11:32:58 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:28.124 11:32:58 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:28.124 11:32:58 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:28.124 11:32:58 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:28.124 11:32:58 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:28.124 11:32:58 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:28.124 11:32:58 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:28.124 00:05:28.124 real 0m0.222s 00:05:28.124 user 0m0.178s 00:05:28.124 sys 0m0.035s 00:05:28.124 11:32:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.124 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 ************************************ 00:05:28.124 END TEST rpc_trace_cmd_test 00:05:28.124 ************************************ 00:05:28.124 11:32:58 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:28.124 11:32:58 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:28.124 11:32:58 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:28.124 11:32:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.124 11:32:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.124 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 ************************************ 00:05:28.124 START TEST rpc_daemon_integrity 00:05:28.124 ************************************ 00:05:28.124 11:32:58 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:28.124 11:32:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:28.124 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.124 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.124 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.124 11:32:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:28.124 11:32:58 -- rpc/rpc.sh@13 -- # jq length 00:05:28.383 11:32:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:28.383 11:32:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:28.383 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:05:28.383 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.383 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.383 11:32:58 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:28.383 11:32:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:28.383 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.383 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.383 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.383 11:32:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:28.383 { 00:05:28.383 "name": "Malloc2", 00:05:28.383 "aliases": [ 00:05:28.383 "d4fa8e85-495c-49c2-8e32-3877a3fd8145" 00:05:28.383 ], 00:05:28.383 "product_name": "Malloc disk", 00:05:28.383 "block_size": 512, 00:05:28.383 "num_blocks": 16384, 00:05:28.383 "uuid": "d4fa8e85-495c-49c2-8e32-3877a3fd8145", 00:05:28.383 "assigned_rate_limits": { 00:05:28.383 "rw_ios_per_sec": 0, 00:05:28.383 "rw_mbytes_per_sec": 0, 00:05:28.383 "r_mbytes_per_sec": 0, 00:05:28.383 "w_mbytes_per_sec": 0 00:05:28.383 }, 00:05:28.383 "claimed": false, 00:05:28.383 "zoned": false, 00:05:28.383 "supported_io_types": { 00:05:28.383 "read": true, 00:05:28.383 "write": true, 00:05:28.383 "unmap": true, 00:05:28.383 "write_zeroes": true, 00:05:28.383 "flush": true, 00:05:28.383 "reset": true, 00:05:28.383 "compare": false, 00:05:28.383 "compare_and_write": false, 00:05:28.383 "abort": true, 00:05:28.383 "nvme_admin": false, 00:05:28.383 "nvme_io": false 00:05:28.383 }, 00:05:28.383 "memory_domains": [ 00:05:28.383 { 00:05:28.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.383 "dma_device_type": 2 00:05:28.383 } 00:05:28.383 ], 00:05:28.383 "driver_specific": {} 00:05:28.383 } 00:05:28.383 ]' 00:05:28.383 11:32:58 -- rpc/rpc.sh@17 -- # jq length 00:05:28.383 11:32:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:28.383 11:32:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:28.383 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.383 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.383 [2024-12-03 11:32:58.857239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:28.383 [2024-12-03 11:32:58.857268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:28.383 [2024-12-03 11:32:58.857282] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db6740 00:05:28.383 [2024-12-03 11:32:58.857290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:28.383 [2024-12-03 11:32:58.858201] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:28.383 [2024-12-03 11:32:58.858221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:28.383 Passthru0 00:05:28.383 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.383 11:32:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:28.383 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.383 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.383 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.383 11:32:58 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:28.383 { 00:05:28.383 "name": "Malloc2", 00:05:28.383 "aliases": [ 00:05:28.383 "d4fa8e85-495c-49c2-8e32-3877a3fd8145" 00:05:28.383 ], 00:05:28.383 "product_name": "Malloc disk", 00:05:28.383 "block_size": 512, 00:05:28.383 "num_blocks": 16384, 00:05:28.383 "uuid": "d4fa8e85-495c-49c2-8e32-3877a3fd8145", 
00:05:28.383 "assigned_rate_limits": { 00:05:28.383 "rw_ios_per_sec": 0, 00:05:28.383 "rw_mbytes_per_sec": 0, 00:05:28.383 "r_mbytes_per_sec": 0, 00:05:28.383 "w_mbytes_per_sec": 0 00:05:28.383 }, 00:05:28.383 "claimed": true, 00:05:28.383 "claim_type": "exclusive_write", 00:05:28.383 "zoned": false, 00:05:28.383 "supported_io_types": { 00:05:28.383 "read": true, 00:05:28.383 "write": true, 00:05:28.383 "unmap": true, 00:05:28.383 "write_zeroes": true, 00:05:28.383 "flush": true, 00:05:28.384 "reset": true, 00:05:28.384 "compare": false, 00:05:28.384 "compare_and_write": false, 00:05:28.384 "abort": true, 00:05:28.384 "nvme_admin": false, 00:05:28.384 "nvme_io": false 00:05:28.384 }, 00:05:28.384 "memory_domains": [ 00:05:28.384 { 00:05:28.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.384 "dma_device_type": 2 00:05:28.384 } 00:05:28.384 ], 00:05:28.384 "driver_specific": {} 00:05:28.384 }, 00:05:28.384 { 00:05:28.384 "name": "Passthru0", 00:05:28.384 "aliases": [ 00:05:28.384 "f5e5e32a-359d-5bb6-8b42-e023bf75f557" 00:05:28.384 ], 00:05:28.384 "product_name": "passthru", 00:05:28.384 "block_size": 512, 00:05:28.384 "num_blocks": 16384, 00:05:28.384 "uuid": "f5e5e32a-359d-5bb6-8b42-e023bf75f557", 00:05:28.384 "assigned_rate_limits": { 00:05:28.384 "rw_ios_per_sec": 0, 00:05:28.384 "rw_mbytes_per_sec": 0, 00:05:28.384 "r_mbytes_per_sec": 0, 00:05:28.384 "w_mbytes_per_sec": 0 00:05:28.384 }, 00:05:28.384 "claimed": false, 00:05:28.384 "zoned": false, 00:05:28.384 "supported_io_types": { 00:05:28.384 "read": true, 00:05:28.384 "write": true, 00:05:28.384 "unmap": true, 00:05:28.384 "write_zeroes": true, 00:05:28.384 "flush": true, 00:05:28.384 "reset": true, 00:05:28.384 "compare": false, 00:05:28.384 "compare_and_write": false, 00:05:28.384 "abort": true, 00:05:28.384 "nvme_admin": false, 00:05:28.384 "nvme_io": false 00:05:28.384 }, 00:05:28.384 "memory_domains": [ 00:05:28.384 { 00:05:28.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.384 "dma_device_type": 2 00:05:28.384 } 00:05:28.384 ], 00:05:28.384 "driver_specific": { 00:05:28.384 "passthru": { 00:05:28.384 "name": "Passthru0", 00:05:28.384 "base_bdev_name": "Malloc2" 00:05:28.384 } 00:05:28.384 } 00:05:28.384 } 00:05:28.384 ]' 00:05:28.384 11:32:58 -- rpc/rpc.sh@21 -- # jq length 00:05:28.384 11:32:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:28.384 11:32:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:28.384 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.384 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.384 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.384 11:32:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:28.384 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.384 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.384 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.384 11:32:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:28.384 11:32:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.384 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.384 11:32:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.384 11:32:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:28.384 11:32:58 -- rpc/rpc.sh@26 -- # jq length 00:05:28.642 11:32:59 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:28.642 00:05:28.642 real 0m0.285s 00:05:28.642 user 0m0.179s 00:05:28.642 sys 0m0.045s 00:05:28.642 11:32:59 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:05:28.642 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.642 ************************************ 00:05:28.642 END TEST rpc_daemon_integrity 00:05:28.642 ************************************ 00:05:28.642 11:32:59 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:28.642 11:32:59 -- rpc/rpc.sh@84 -- # killprocess 3568107 00:05:28.642 11:32:59 -- common/autotest_common.sh@936 -- # '[' -z 3568107 ']' 00:05:28.642 11:32:59 -- common/autotest_common.sh@940 -- # kill -0 3568107 00:05:28.642 11:32:59 -- common/autotest_common.sh@941 -- # uname 00:05:28.642 11:32:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.642 11:32:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3568107 00:05:28.642 11:32:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.642 11:32:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.642 11:32:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3568107' 00:05:28.642 killing process with pid 3568107 00:05:28.642 11:32:59 -- common/autotest_common.sh@955 -- # kill 3568107 00:05:28.642 11:32:59 -- common/autotest_common.sh@960 -- # wait 3568107 00:05:28.900 00:05:28.900 real 0m2.551s 00:05:28.900 user 0m3.156s 00:05:28.900 sys 0m0.771s 00:05:28.900 11:32:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.900 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.900 ************************************ 00:05:28.900 END TEST rpc 00:05:28.900 ************************************ 00:05:28.900 11:32:59 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.900 11:32:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.900 11:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.900 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.900 ************************************ 00:05:28.900 START TEST rpc_client 00:05:28.900 ************************************ 00:05:28.900 11:32:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.159 * Looking for test storage... 
00:05:29.159 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:29.159 11:32:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:29.159 11:32:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:29.159 11:32:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:29.159 11:32:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:29.159 11:32:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:29.159 11:32:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:29.159 11:32:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:29.159 11:32:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:29.159 11:32:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:29.159 11:32:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.159 11:32:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:29.159 11:32:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:29.159 11:32:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:29.159 11:32:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:29.159 11:32:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:29.159 11:32:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:29.159 11:32:59 -- scripts/common.sh@344 -- # : 1 00:05:29.159 11:32:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:29.159 11:32:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.159 11:32:59 -- scripts/common.sh@364 -- # decimal 1 00:05:29.159 11:32:59 -- scripts/common.sh@352 -- # local d=1 00:05:29.159 11:32:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.159 11:32:59 -- scripts/common.sh@354 -- # echo 1 00:05:29.159 11:32:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:29.159 11:32:59 -- scripts/common.sh@365 -- # decimal 2 00:05:29.159 11:32:59 -- scripts/common.sh@352 -- # local d=2 00:05:29.159 11:32:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.159 11:32:59 -- scripts/common.sh@354 -- # echo 2 00:05:29.159 11:32:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:29.159 11:32:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:29.159 11:32:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:29.159 11:32:59 -- scripts/common.sh@367 -- # return 0 00:05:29.159 11:32:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.159 11:32:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:29.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.159 --rc genhtml_branch_coverage=1 00:05:29.159 --rc genhtml_function_coverage=1 00:05:29.159 --rc genhtml_legend=1 00:05:29.159 --rc geninfo_all_blocks=1 00:05:29.159 --rc geninfo_unexecuted_blocks=1 00:05:29.159 00:05:29.159 ' 00:05:29.159 11:32:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:29.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.159 --rc genhtml_branch_coverage=1 00:05:29.159 --rc genhtml_function_coverage=1 00:05:29.159 --rc genhtml_legend=1 00:05:29.159 --rc geninfo_all_blocks=1 00:05:29.159 --rc geninfo_unexecuted_blocks=1 00:05:29.159 00:05:29.159 ' 00:05:29.159 11:32:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:29.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.159 --rc genhtml_branch_coverage=1 00:05:29.159 --rc genhtml_function_coverage=1 00:05:29.159 --rc genhtml_legend=1 00:05:29.159 --rc geninfo_all_blocks=1 00:05:29.159 --rc geninfo_unexecuted_blocks=1 00:05:29.159 00:05:29.159 ' 
00:05:29.159 11:32:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:29.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.159 --rc genhtml_branch_coverage=1 00:05:29.159 --rc genhtml_function_coverage=1 00:05:29.159 --rc genhtml_legend=1 00:05:29.159 --rc geninfo_all_blocks=1 00:05:29.159 --rc geninfo_unexecuted_blocks=1 00:05:29.159 00:05:29.159 ' 00:05:29.159 11:32:59 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:29.159 OK 00:05:29.159 11:32:59 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.159 00:05:29.159 real 0m0.206s 00:05:29.159 user 0m0.121s 00:05:29.159 sys 0m0.098s 00:05:29.159 11:32:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.159 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:29.159 ************************************ 00:05:29.159 END TEST rpc_client 00:05:29.159 ************************************ 00:05:29.160 11:32:59 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.160 11:32:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.160 11:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.160 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:29.160 ************************************ 00:05:29.160 START TEST json_config 00:05:29.160 ************************************ 00:05:29.160 11:32:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.418 11:32:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:29.418 11:32:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:29.418 11:32:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:29.418 11:32:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:29.418 11:32:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:29.418 11:32:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:29.418 11:32:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:29.418 11:32:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:29.418 11:32:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:29.418 11:32:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.418 11:32:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:29.418 11:32:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:29.418 11:32:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:29.418 11:32:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:29.418 11:32:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:29.418 11:32:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:29.418 11:32:59 -- scripts/common.sh@344 -- # : 1 00:05:29.418 11:32:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:29.418 11:32:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.418 11:32:59 -- scripts/common.sh@364 -- # decimal 1 00:05:29.418 11:32:59 -- scripts/common.sh@352 -- # local d=1 00:05:29.418 11:32:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.418 11:32:59 -- scripts/common.sh@354 -- # echo 1 00:05:29.418 11:32:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:29.418 11:32:59 -- scripts/common.sh@365 -- # decimal 2 00:05:29.418 11:32:59 -- scripts/common.sh@352 -- # local d=2 00:05:29.418 11:32:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.418 11:32:59 -- scripts/common.sh@354 -- # echo 2 00:05:29.418 11:32:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:29.419 11:32:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:29.419 11:32:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:29.419 11:32:59 -- scripts/common.sh@367 -- # return 0 00:05:29.419 11:32:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.419 11:32:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:29.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.419 --rc genhtml_branch_coverage=1 00:05:29.419 --rc genhtml_function_coverage=1 00:05:29.419 --rc genhtml_legend=1 00:05:29.419 --rc geninfo_all_blocks=1 00:05:29.419 --rc geninfo_unexecuted_blocks=1 00:05:29.419 00:05:29.419 ' 00:05:29.419 11:32:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:29.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.419 --rc genhtml_branch_coverage=1 00:05:29.419 --rc genhtml_function_coverage=1 00:05:29.419 --rc genhtml_legend=1 00:05:29.419 --rc geninfo_all_blocks=1 00:05:29.419 --rc geninfo_unexecuted_blocks=1 00:05:29.419 00:05:29.419 ' 00:05:29.419 11:32:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:29.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.419 --rc genhtml_branch_coverage=1 00:05:29.419 --rc genhtml_function_coverage=1 00:05:29.419 --rc genhtml_legend=1 00:05:29.419 --rc geninfo_all_blocks=1 00:05:29.419 --rc geninfo_unexecuted_blocks=1 00:05:29.419 00:05:29.419 ' 00:05:29.419 11:32:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:29.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.419 --rc genhtml_branch_coverage=1 00:05:29.419 --rc genhtml_function_coverage=1 00:05:29.419 --rc genhtml_legend=1 00:05:29.419 --rc geninfo_all_blocks=1 00:05:29.419 --rc geninfo_unexecuted_blocks=1 00:05:29.419 00:05:29.419 ' 00:05:29.419 11:32:59 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.419 11:32:59 -- nvmf/common.sh@7 -- # uname -s 00:05:29.419 11:32:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.419 11:32:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.419 11:32:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.419 11:32:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.419 11:32:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.419 11:32:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.419 11:32:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.419 11:32:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.419 11:32:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.419 11:32:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.419 11:32:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:29.419 11:32:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:29.419 11:32:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.419 11:32:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.419 11:32:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.419 11:32:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:29.419 11:32:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.419 11:32:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.419 11:32:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.419 11:32:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.419 11:32:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.419 11:32:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.419 11:32:59 -- paths/export.sh@5 -- # export PATH 00:05:29.419 11:32:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.419 11:32:59 -- nvmf/common.sh@46 -- # : 0 00:05:29.419 11:32:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:29.419 11:32:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:29.419 11:32:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:29.419 11:32:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.419 11:32:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.419 11:32:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:29.419 11:32:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:29.419 11:32:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:29.419 11:32:59 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:29.419 11:32:59 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:29.419 11:32:59 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:29.419 11:32:59 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:29.419 11:32:59 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:29.419 11:32:59 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:29.419 11:32:59 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:29.419 11:32:59 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:29.419 11:32:59 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:29.419 11:32:59 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:29.419 11:32:59 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:29.419 11:32:59 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:29.419 11:32:59 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:29.419 11:32:59 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.419 11:32:59 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:29.419 INFO: JSON configuration test init 00:05:29.419 11:32:59 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:29.419 11:32:59 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:29.419 11:32:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.419 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:29.419 11:32:59 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:29.419 11:32:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.419 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:29.419 11:32:59 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:29.419 11:32:59 -- json_config/json_config.sh@98 -- # local app=target 00:05:29.419 11:32:59 -- json_config/json_config.sh@99 -- # shift 00:05:29.419 11:32:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:29.419 11:32:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:29.419 11:32:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:29.419 11:32:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:29.419 11:32:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:29.419 11:32:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=3568751 00:05:29.419 11:32:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:29.419 Waiting for target to run... 
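The json_config test starts its own target (app_pid 3568751) with --wait-for-rpc, so framework initialization is deferred until configuration arrives over the dedicated socket /var/tmp/spdk_tgt.sock. A rough sketch of that flow under the same flags; the polling loop is a stand-in for the test's waitforlisten helper, paths are shortened, and the exact RPC ordering in json_config.sh may differ:

  # start the target with framework init deferred (illustrative)
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # wait until the RPC socket answers (stand-in for waitforlisten)
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # generate an NVMe subsystem config and feed it to the waiting target
  ./scripts/gen_nvme.sh --json-with-subsystems | ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config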
00:05:29.419 11:32:59 -- json_config/json_config.sh@114 -- # waitforlisten 3568751 /var/tmp/spdk_tgt.sock 00:05:29.419 11:32:59 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:29.419 11:32:59 -- common/autotest_common.sh@829 -- # '[' -z 3568751 ']' 00:05:29.419 11:32:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.419 11:32:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.419 11:32:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.419 11:32:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.419 11:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:29.419 [2024-12-03 11:32:59.993885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.419 [2024-12-03 11:32:59.993936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568751 ] 00:05:29.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.985 [2024-12-03 11:33:00.432379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.985 [2024-12-03 11:33:00.512466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.985 [2024-12-03 11:33:00.512578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.243 11:33:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.243 11:33:00 -- common/autotest_common.sh@862 -- # return 0 00:05:30.243 11:33:00 -- json_config/json_config.sh@115 -- # echo '' 00:05:30.243 00:05:30.243 11:33:00 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:30.243 11:33:00 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:30.243 11:33:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.243 11:33:00 -- common/autotest_common.sh@10 -- # set +x 00:05:30.243 11:33:00 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:30.243 11:33:00 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:30.243 11:33:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.243 11:33:00 -- common/autotest_common.sh@10 -- # set +x 00:05:30.243 11:33:00 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:30.243 11:33:00 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:30.243 11:33:00 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:33.526 11:33:03 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:33.526 11:33:03 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:33.526 11:33:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.526 11:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:33.526 11:33:03 -- json_config/json_config.sh@48 -- # local ret=0 00:05:33.526 11:33:03 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.526 11:33:03 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:05:33.526 11:33:03 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:33.526 11:33:03 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:33.526 11:33:03 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.783 11:33:04 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.783 11:33:04 -- json_config/json_config.sh@51 -- # local get_types 00:05:33.783 11:33:04 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.783 11:33:04 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:33.783 11:33:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.783 11:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:33.783 11:33:04 -- json_config/json_config.sh@58 -- # return 0 00:05:33.783 11:33:04 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:33.783 11:33:04 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:33.783 11:33:04 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:33.783 11:33:04 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:33.783 11:33:04 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:33.783 11:33:04 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:33.783 11:33:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.783 11:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:33.783 11:33:04 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.783 11:33:04 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:05:33.783 11:33:04 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:05:33.783 11:33:04 -- json_config/json_config.sh@287 -- # nvmftestinit 00:05:33.783 11:33:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:05:33.783 11:33:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:33.783 11:33:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:05:33.783 11:33:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:05:33.783 11:33:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:05:33.783 11:33:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:33.783 11:33:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:33.783 11:33:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:33.783 11:33:04 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:05:33.783 11:33:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:05:33.783 11:33:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:05:33.783 11:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.338 11:33:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:40.338 11:33:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:05:40.338 11:33:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:05:40.338 11:33:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:05:40.338 11:33:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:05:40.338 11:33:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:05:40.338 11:33:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:05:40.338 11:33:10 -- nvmf/common.sh@294 -- # net_devs=() 00:05:40.338 11:33:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:05:40.338 11:33:10 -- nvmf/common.sh@295 -- # 
e810=() 00:05:40.338 11:33:10 -- nvmf/common.sh@295 -- # local -ga e810 00:05:40.338 11:33:10 -- nvmf/common.sh@296 -- # x722=() 00:05:40.338 11:33:10 -- nvmf/common.sh@296 -- # local -ga x722 00:05:40.338 11:33:10 -- nvmf/common.sh@297 -- # mlx=() 00:05:40.338 11:33:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:05:40.338 11:33:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:40.338 11:33:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:05:40.338 11:33:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:05:40.338 11:33:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:05:40.338 11:33:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:05:40.338 11:33:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:05:40.338 11:33:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:05:40.338 11:33:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:05:40.338 11:33:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:40.338 11:33:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:40.338 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:40.338 11:33:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:40.339 11:33:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:40.339 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:40.339 11:33:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:40.339 11:33:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:05:40.339 11:33:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.339 11:33:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
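nvmf/common.sh above builds the e810/x722/mlx PCI ID lists and then maps each matching PCI function to its network interface through sysfs ("/sys/bus/pci/devices/$pci/net/"*). That lookup pulled out on its own, as a small illustrative sketch (the PCI address is the one reported in this run):

# resolve a PCI function to its netdev and confirm its vendor/device IDs
pci=0000:d9:00.0
ls /sys/bus/pci/devices/$pci/net/       # -> mlx_0_0 in this run
cat /sys/bus/pci/devices/$pci/vendor    # -> 0x15b3 (Mellanox)
cat /sys/bus/pci/devices/$pci/device    # -> 0x1015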
00:05:40.339 11:33:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.339 11:33:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:40.339 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.339 11:33:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.339 11:33:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:40.339 11:33:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.339 11:33:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:40.339 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:40.339 11:33:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.339 11:33:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:05:40.339 11:33:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:05:40.339 11:33:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:05:40.339 11:33:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:05:40.339 11:33:10 -- nvmf/common.sh@57 -- # uname 00:05:40.339 11:33:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:05:40.339 11:33:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:05:40.339 11:33:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:05:40.339 11:33:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:05:40.339 11:33:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:05:40.339 11:33:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:05:40.339 11:33:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:05:40.339 11:33:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:05:40.339 11:33:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:05:40.339 11:33:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:40.339 11:33:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:05:40.339 11:33:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:40.339 11:33:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:40.339 11:33:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:40.339 11:33:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:40.339 11:33:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:40.339 11:33:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@104 -- # continue 2 00:05:40.339 11:33:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:40.339 11:33:10 -- nvmf/common.sh@104 -- # continue 2 00:05:40.339 11:33:10 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:05:40.339 11:33:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:40.339 11:33:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:05:40.339 11:33:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:05:40.339 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:40.339 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:40.339 altname enp217s0f0np0 00:05:40.339 altname ens818f0np0 00:05:40.339 inet 192.168.100.8/24 scope global mlx_0_0 00:05:40.339 valid_lft forever preferred_lft forever 00:05:40.339 11:33:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:40.339 11:33:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:05:40.339 11:33:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:40.339 11:33:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:05:40.339 11:33:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:05:40.339 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:40.339 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:40.339 altname enp217s0f1np1 00:05:40.339 altname ens818f1np1 00:05:40.339 inet 192.168.100.9/24 scope global mlx_0_1 00:05:40.339 valid_lft forever preferred_lft forever 00:05:40.339 11:33:10 -- nvmf/common.sh@410 -- # return 0 00:05:40.339 11:33:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:05:40.339 11:33:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:40.339 11:33:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:05:40.339 11:33:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:05:40.339 11:33:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:40.339 11:33:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:40.339 11:33:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:40.339 11:33:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:40.339 11:33:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:40.339 11:33:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@104 -- # continue 2 00:05:40.339 11:33:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.339 11:33:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:40.339 11:33:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:40.339 11:33:10 -- 
nvmf/common.sh@104 -- # continue 2 00:05:40.339 11:33:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:40.339 11:33:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:40.339 11:33:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:40.339 11:33:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:05:40.339 11:33:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:40.339 11:33:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:40.339 11:33:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:05:40.339 192.168.100.9' 00:05:40.339 11:33:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:05:40.339 192.168.100.9' 00:05:40.339 11:33:10 -- nvmf/common.sh@445 -- # head -n 1 00:05:40.339 11:33:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:40.339 11:33:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:40.339 192.168.100.9' 00:05:40.339 11:33:10 -- nvmf/common.sh@446 -- # tail -n +2 00:05:40.339 11:33:10 -- nvmf/common.sh@446 -- # head -n 1 00:05:40.339 11:33:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:40.339 11:33:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:05:40.339 11:33:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:40.339 11:33:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:05:40.339 11:33:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:05:40.339 11:33:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:05:40.598 11:33:10 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:05:40.598 11:33:10 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.598 11:33:10 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.598 MallocForNvmf0 00:05:40.598 11:33:11 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:40.598 11:33:11 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:40.856 MallocForNvmf1 00:05:40.856 11:33:11 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:40.856 11:33:11 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:40.856 [2024-12-03 11:33:11.462366] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:41.114 [2024-12-03 11:33:11.493995] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15b51f0/0x15c1ce0) succeed. 00:05:41.114 [2024-12-03 11:33:11.505586] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15b73e0/0x1603380) succeed. 
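The get_ip_address calls above reduce `ip -o -4 addr show <if>` to a bare IPv4 address with awk and cut, and the first and second target IPs are then peeled off RDMA_IP_LIST with head and tail. Condensed into one short sketch, using the interface names and addresses seen in this run:

# extract the IPv4 address of each RDMA-capable interface (same pipeline as the trace)
NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)    # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.9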
00:05:41.114 11:33:11 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.114 11:33:11 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.114 11:33:11 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.114 11:33:11 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.372 11:33:11 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.372 11:33:11 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.631 11:33:12 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:41.631 11:33:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:41.631 [2024-12-03 11:33:12.200990] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:41.631 11:33:12 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:41.631 11:33:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.631 11:33:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.889 11:33:12 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:41.889 11:33:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.889 11:33:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.889 11:33:12 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:41.889 11:33:12 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.889 11:33:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.889 MallocBdevForConfigChangeCheck 00:05:41.889 11:33:12 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:41.889 11:33:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.889 11:33:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.147 11:33:12 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:42.147 11:33:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.406 11:33:12 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:42.406 INFO: shutting down applications... 
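The RPC calls traced above are what assemble the NVMe-oF/RDMA target that the JSON config test then snapshots. Written out as one sequence (same RPCs and arguments as in the trace, with the rpc.py path shortened to the SPDK tree for readability):

# build the NVMf target configuration over the target's RPC socket
rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512  --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t rdma -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420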
00:05:42.406 11:33:12 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:42.406 11:33:12 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:42.406 11:33:12 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:42.406 11:33:12 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:44.937 Calling clear_iscsi_subsystem 00:05:44.937 Calling clear_nvmf_subsystem 00:05:44.937 Calling clear_nbd_subsystem 00:05:44.937 Calling clear_ublk_subsystem 00:05:44.937 Calling clear_vhost_blk_subsystem 00:05:44.937 Calling clear_vhost_scsi_subsystem 00:05:44.937 Calling clear_scheduler_subsystem 00:05:44.937 Calling clear_bdev_subsystem 00:05:44.937 Calling clear_accel_subsystem 00:05:44.937 Calling clear_vmd_subsystem 00:05:44.937 Calling clear_sock_subsystem 00:05:44.937 Calling clear_iobuf_subsystem 00:05:44.937 11:33:15 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:44.937 11:33:15 -- json_config/json_config.sh@396 -- # count=100 00:05:44.937 11:33:15 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:44.938 11:33:15 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.938 11:33:15 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:44.938 11:33:15 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:45.195 11:33:15 -- json_config/json_config.sh@398 -- # break 00:05:45.195 11:33:15 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:45.195 11:33:15 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:45.195 11:33:15 -- json_config/json_config.sh@120 -- # local app=target 00:05:45.195 11:33:15 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:45.195 11:33:15 -- json_config/json_config.sh@124 -- # [[ -n 3568751 ]] 00:05:45.195 11:33:15 -- json_config/json_config.sh@127 -- # kill -SIGINT 3568751 00:05:45.195 11:33:15 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:45.195 11:33:15 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:45.195 11:33:15 -- json_config/json_config.sh@130 -- # kill -0 3568751 00:05:45.195 11:33:15 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:45.763 11:33:16 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:45.763 11:33:16 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:45.763 11:33:16 -- json_config/json_config.sh@130 -- # kill -0 3568751 00:05:45.763 11:33:16 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:45.763 11:33:16 -- json_config/json_config.sh@132 -- # break 00:05:45.763 11:33:16 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:45.763 11:33:16 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:45.763 SPDK target shutdown done 00:05:45.763 11:33:16 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:45.763 INFO: relaunching applications... 
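json_config_test_shutdown_app above stops the target by sending SIGINT and then polling kill -0 in half-second steps, up to 30 tries, until the process disappears. The same pattern in isolation (the pid is the one from this run, shown only as an example):

# graceful shutdown: SIGINT, then wait for the pid to go away
pid=3568751
kill -SIGINT "$pid"
for i in $(seq 1 30); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "SPDK target shutdown done"
        break
    fi
    sleep 0.5
done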
00:05:45.763 11:33:16 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.763 11:33:16 -- json_config/json_config.sh@98 -- # local app=target 00:05:45.763 11:33:16 -- json_config/json_config.sh@99 -- # shift 00:05:45.763 11:33:16 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:45.763 11:33:16 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:45.763 11:33:16 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:45.763 11:33:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:45.763 11:33:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:45.763 11:33:16 -- json_config/json_config.sh@111 -- # app_pid[$app]=3574387 00:05:45.763 11:33:16 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:45.763 Waiting for target to run... 00:05:45.763 11:33:16 -- json_config/json_config.sh@114 -- # waitforlisten 3574387 /var/tmp/spdk_tgt.sock 00:05:45.764 11:33:16 -- common/autotest_common.sh@829 -- # '[' -z 3574387 ']' 00:05:45.764 11:33:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.764 11:33:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.764 11:33:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.764 11:33:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.764 11:33:16 -- common/autotest_common.sh@10 -- # set +x 00:05:45.764 11:33:16 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.764 [2024-12-03 11:33:16.232790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.764 [2024-12-03 11:33:16.232849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574387 ] 00:05:45.764 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.023 [2024-12-03 11:33:16.526432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.023 [2024-12-03 11:33:16.588013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.023 [2024-12-03 11:33:16.588135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.304 [2024-12-03 11:33:19.628297] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdf0970/0xdaedb0) succeed. 00:05:49.304 [2024-12-03 11:33:19.639436] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdf2b60/0xc5d1d0) succeed. 
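The relaunch above hands the previously saved spdk_tgt_config.json back to the target with --json, so the rebuilt process comes up with the same bdevs and NVMf subsystems. Reduced to the save/reload pair (a sketch; paths are as in the trace, relative to the SPDK tree):

# persist the live configuration, then restart the target from it
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json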
00:05:49.304 [2024-12-03 11:33:19.687403] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:49.892 11:33:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.892 11:33:20 -- common/autotest_common.sh@862 -- # return 0 00:05:49.892 11:33:20 -- json_config/json_config.sh@115 -- # echo '' 00:05:49.892 00:05:49.892 11:33:20 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:49.892 11:33:20 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:49.892 INFO: Checking if target configuration is the same... 00:05:49.892 11:33:20 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.892 11:33:20 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:49.892 11:33:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.892 + '[' 2 -ne 2 ']' 00:05:49.892 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:49.892 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:49.892 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:49.892 +++ basename /dev/fd/62 00:05:49.892 ++ mktemp /tmp/62.XXX 00:05:49.892 + tmp_file_1=/tmp/62.6sm 00:05:49.892 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.892 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:49.892 + tmp_file_2=/tmp/spdk_tgt_config.json.Elz 00:05:49.892 + ret=0 00:05:49.892 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.149 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.149 + diff -u /tmp/62.6sm /tmp/spdk_tgt_config.json.Elz 00:05:50.149 + echo 'INFO: JSON config files are the same' 00:05:50.149 INFO: JSON config files are the same 00:05:50.149 + rm /tmp/62.6sm /tmp/spdk_tgt_config.json.Elz 00:05:50.149 + exit 0 00:05:50.149 11:33:20 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:50.149 11:33:20 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:50.149 INFO: changing configuration and checking if this can be detected... 00:05:50.149 11:33:20 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.149 11:33:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.405 11:33:20 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.405 11:33:20 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:50.405 11:33:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.405 + '[' 2 -ne 2 ']' 00:05:50.405 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.405 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
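json_diff.sh above decides whether the live configuration still matches the file on disk by normalizing both sides with config_filter.py -method sort and diffing the results. The core of that comparison, assuming (as the trace suggests) that config_filter.py reads the config on stdin; file names here are illustrative:

# normalize the live and saved configs, then diff them
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'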
00:05:50.405 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:50.405 +++ basename /dev/fd/62 00:05:50.405 ++ mktemp /tmp/62.XXX 00:05:50.405 + tmp_file_1=/tmp/62.qGA 00:05:50.405 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.405 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.405 + tmp_file_2=/tmp/spdk_tgt_config.json.NEZ 00:05:50.405 + ret=0 00:05:50.405 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.674 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.952 + diff -u /tmp/62.qGA /tmp/spdk_tgt_config.json.NEZ 00:05:50.952 + ret=1 00:05:50.952 + echo '=== Start of file: /tmp/62.qGA ===' 00:05:50.952 + cat /tmp/62.qGA 00:05:50.952 + echo '=== End of file: /tmp/62.qGA ===' 00:05:50.952 + echo '' 00:05:50.952 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NEZ ===' 00:05:50.952 + cat /tmp/spdk_tgt_config.json.NEZ 00:05:50.952 + echo '=== End of file: /tmp/spdk_tgt_config.json.NEZ ===' 00:05:50.952 + echo '' 00:05:50.952 + rm /tmp/62.qGA /tmp/spdk_tgt_config.json.NEZ 00:05:50.952 + exit 1 00:05:50.952 11:33:21 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:50.952 INFO: configuration change detected. 00:05:50.952 11:33:21 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:50.952 11:33:21 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:50.952 11:33:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.952 11:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.952 11:33:21 -- json_config/json_config.sh@360 -- # local ret=0 00:05:50.952 11:33:21 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:50.952 11:33:21 -- json_config/json_config.sh@370 -- # [[ -n 3574387 ]] 00:05:50.952 11:33:21 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:50.952 11:33:21 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:50.952 11:33:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.952 11:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.952 11:33:21 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:50.952 11:33:21 -- json_config/json_config.sh@246 -- # uname -s 00:05:50.952 11:33:21 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:50.952 11:33:21 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:50.952 11:33:21 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:50.952 11:33:21 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:50.952 11:33:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.952 11:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.952 11:33:21 -- json_config/json_config.sh@376 -- # killprocess 3574387 00:05:50.952 11:33:21 -- common/autotest_common.sh@936 -- # '[' -z 3574387 ']' 00:05:50.952 11:33:21 -- common/autotest_common.sh@940 -- # kill -0 3574387 00:05:50.952 11:33:21 -- common/autotest_common.sh@941 -- # uname 00:05:50.952 11:33:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.952 11:33:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3574387 00:05:50.952 11:33:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.952 11:33:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.952 11:33:21 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 3574387' 00:05:50.952 killing process with pid 3574387 00:05:50.952 11:33:21 -- common/autotest_common.sh@955 -- # kill 3574387 00:05:50.952 11:33:21 -- common/autotest_common.sh@960 -- # wait 3574387 00:05:53.513 11:33:23 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.513 11:33:23 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:53.513 11:33:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.513 11:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.513 11:33:24 -- json_config/json_config.sh@381 -- # return 0 00:05:53.513 11:33:24 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:53.513 INFO: Success 00:05:53.513 11:33:24 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:53.513 11:33:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:05:53.513 11:33:24 -- nvmf/common.sh@116 -- # sync 00:05:53.513 11:33:24 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:05:53.513 11:33:24 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:05:53.513 11:33:24 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:05:53.513 11:33:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:05:53.513 11:33:24 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:05:53.513 00:05:53.513 real 0m24.282s 00:05:53.513 user 0m27.144s 00:05:53.513 sys 0m7.465s 00:05:53.513 11:33:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.513 11:33:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.513 ************************************ 00:05:53.513 END TEST json_config 00:05:53.513 ************************************ 00:05:53.513 11:33:24 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.513 11:33:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.513 11:33:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.513 11:33:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.513 ************************************ 00:05:53.513 START TEST json_config_extra_key 00:05:53.514 ************************************ 00:05:53.514 11:33:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.514 11:33:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:53.514 11:33:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:53.514 11:33:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:53.773 11:33:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:53.773 11:33:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:53.773 11:33:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:53.773 11:33:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:53.773 11:33:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:53.773 11:33:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:53.773 11:33:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.773 11:33:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:53.773 11:33:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:53.773 11:33:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:53.773 11:33:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:53.773 11:33:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:53.773 11:33:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:53.773 11:33:24 -- 
scripts/common.sh@344 -- # : 1 00:05:53.773 11:33:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:53.773 11:33:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.773 11:33:24 -- scripts/common.sh@364 -- # decimal 1 00:05:53.773 11:33:24 -- scripts/common.sh@352 -- # local d=1 00:05:53.773 11:33:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.773 11:33:24 -- scripts/common.sh@354 -- # echo 1 00:05:53.773 11:33:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:53.773 11:33:24 -- scripts/common.sh@365 -- # decimal 2 00:05:53.773 11:33:24 -- scripts/common.sh@352 -- # local d=2 00:05:53.773 11:33:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.773 11:33:24 -- scripts/common.sh@354 -- # echo 2 00:05:53.773 11:33:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:53.773 11:33:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:53.773 11:33:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:53.773 11:33:24 -- scripts/common.sh@367 -- # return 0 00:05:53.773 11:33:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.773 11:33:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:53.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.773 --rc genhtml_branch_coverage=1 00:05:53.773 --rc genhtml_function_coverage=1 00:05:53.773 --rc genhtml_legend=1 00:05:53.773 --rc geninfo_all_blocks=1 00:05:53.773 --rc geninfo_unexecuted_blocks=1 00:05:53.773 00:05:53.773 ' 00:05:53.773 11:33:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:53.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.773 --rc genhtml_branch_coverage=1 00:05:53.773 --rc genhtml_function_coverage=1 00:05:53.773 --rc genhtml_legend=1 00:05:53.773 --rc geninfo_all_blocks=1 00:05:53.773 --rc geninfo_unexecuted_blocks=1 00:05:53.773 00:05:53.773 ' 00:05:53.773 11:33:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:53.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.773 --rc genhtml_branch_coverage=1 00:05:53.773 --rc genhtml_function_coverage=1 00:05:53.773 --rc genhtml_legend=1 00:05:53.773 --rc geninfo_all_blocks=1 00:05:53.773 --rc geninfo_unexecuted_blocks=1 00:05:53.773 00:05:53.773 ' 00:05:53.773 11:33:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:53.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.773 --rc genhtml_branch_coverage=1 00:05:53.773 --rc genhtml_function_coverage=1 00:05:53.773 --rc genhtml_legend=1 00:05:53.773 --rc geninfo_all_blocks=1 00:05:53.773 --rc geninfo_unexecuted_blocks=1 00:05:53.773 00:05:53.773 ' 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.773 11:33:24 -- nvmf/common.sh@7 -- # uname -s 00:05:53.773 11:33:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.773 11:33:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.773 11:33:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.773 11:33:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.773 11:33:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.773 11:33:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.773 11:33:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.773 11:33:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.773 11:33:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
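The scripts/common.sh trace in the lcov check above compares dotted version strings by splitting them on '.' and walking the components left to right. The same idea restated as a tiny stand-alone helper (the name version_lt is hypothetical; the cmp_versions function in the tree is more general):

# succeed if $1 is an older dotted version than $2
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1
}
version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'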
00:05:53.773 11:33:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.773 11:33:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:53.773 11:33:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:53.773 11:33:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.773 11:33:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.773 11:33:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.773 11:33:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:53.773 11:33:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.773 11:33:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.773 11:33:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.773 11:33:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.773 11:33:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.773 11:33:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.773 11:33:24 -- paths/export.sh@5 -- # export PATH 00:05:53.773 11:33:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.773 11:33:24 -- nvmf/common.sh@46 -- # : 0 00:05:53.773 11:33:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:53.773 11:33:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:53.773 11:33:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:53.773 11:33:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.773 11:33:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.773 11:33:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:53.773 11:33:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:53.773 11:33:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@16 
-- # declare -A app_pid 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:53.773 INFO: launching applications... 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:53.773 11:33:24 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3575881 00:05:53.774 11:33:24 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:53.774 Waiting for target to run... 00:05:53.774 11:33:24 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3575881 /var/tmp/spdk_tgt.sock 00:05:53.774 11:33:24 -- common/autotest_common.sh@829 -- # '[' -z 3575881 ']' 00:05:53.774 11:33:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.774 11:33:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.774 11:33:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.774 11:33:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.774 11:33:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.774 11:33:24 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:53.774 [2024-12-03 11:33:24.270643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:53.774 [2024-12-03 11:33:24.270704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3575881 ] 00:05:53.774 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.031 [2024-12-03 11:33:24.555949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.031 [2024-12-03 11:33:24.621068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.031 [2024-12-03 11:33:24.621195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.597 11:33:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.597 11:33:25 -- common/autotest_common.sh@862 -- # return 0 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:54.597 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:54.597 INFO: shutting down applications... 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3575881 ]] 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3575881 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3575881 00:05:54.597 11:33:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:55.164 11:33:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:55.164 11:33:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:55.164 11:33:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3575881 00:05:55.164 11:33:25 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:55.164 11:33:25 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:55.164 11:33:25 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:55.164 11:33:25 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:55.164 SPDK target shutdown done 00:05:55.164 11:33:25 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:55.164 Success 00:05:55.164 00:05:55.164 real 0m1.488s 00:05:55.164 user 0m1.240s 00:05:55.164 sys 0m0.403s 00:05:55.164 11:33:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.164 11:33:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.164 ************************************ 00:05:55.164 END TEST json_config_extra_key 00:05:55.164 ************************************ 00:05:55.164 11:33:25 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.164 11:33:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.164 11:33:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.164 11:33:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.164 ************************************ 00:05:55.164 START TEST alias_rpc 00:05:55.164 ************************************ 00:05:55.164 11:33:25 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.164 * Looking for test storage... 00:05:55.164 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:55.164 11:33:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.164 11:33:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.164 11:33:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.164 11:33:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.164 11:33:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.164 11:33:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.164 11:33:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.164 11:33:25 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.164 11:33:25 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.164 11:33:25 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.164 11:33:25 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.164 11:33:25 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.164 11:33:25 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.164 11:33:25 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.164 11:33:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.164 11:33:25 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.164 11:33:25 -- scripts/common.sh@344 -- # : 1 00:05:55.164 11:33:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.164 11:33:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.164 11:33:25 -- scripts/common.sh@364 -- # decimal 1 00:05:55.164 11:33:25 -- scripts/common.sh@352 -- # local d=1 00:05:55.164 11:33:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.164 11:33:25 -- scripts/common.sh@354 -- # echo 1 00:05:55.422 11:33:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.422 11:33:25 -- scripts/common.sh@365 -- # decimal 2 00:05:55.422 11:33:25 -- scripts/common.sh@352 -- # local d=2 00:05:55.422 11:33:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.422 11:33:25 -- scripts/common.sh@354 -- # echo 2 00:05:55.422 11:33:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.422 11:33:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.422 11:33:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.422 11:33:25 -- scripts/common.sh@367 -- # return 0 00:05:55.422 11:33:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.422 11:33:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.422 --rc genhtml_branch_coverage=1 00:05:55.422 --rc genhtml_function_coverage=1 00:05:55.422 --rc genhtml_legend=1 00:05:55.422 --rc geninfo_all_blocks=1 00:05:55.422 --rc geninfo_unexecuted_blocks=1 00:05:55.422 00:05:55.422 ' 00:05:55.422 11:33:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.422 --rc genhtml_branch_coverage=1 00:05:55.422 --rc genhtml_function_coverage=1 00:05:55.422 --rc genhtml_legend=1 00:05:55.422 --rc geninfo_all_blocks=1 00:05:55.422 --rc geninfo_unexecuted_blocks=1 00:05:55.422 00:05:55.422 ' 00:05:55.422 11:33:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.422 --rc genhtml_branch_coverage=1 00:05:55.422 --rc 
genhtml_function_coverage=1 00:05:55.422 --rc genhtml_legend=1 00:05:55.422 --rc geninfo_all_blocks=1 00:05:55.422 --rc geninfo_unexecuted_blocks=1 00:05:55.422 00:05:55.422 ' 00:05:55.422 11:33:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.422 --rc genhtml_branch_coverage=1 00:05:55.422 --rc genhtml_function_coverage=1 00:05:55.422 --rc genhtml_legend=1 00:05:55.422 --rc geninfo_all_blocks=1 00:05:55.422 --rc geninfo_unexecuted_blocks=1 00:05:55.422 00:05:55.422 ' 00:05:55.422 11:33:25 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.422 11:33:25 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3576209 00:05:55.422 11:33:25 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3576209 00:05:55.423 11:33:25 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.423 11:33:25 -- common/autotest_common.sh@829 -- # '[' -z 3576209 ']' 00:05:55.423 11:33:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.423 11:33:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.423 11:33:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.423 11:33:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.423 11:33:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.423 [2024-12-03 11:33:25.837294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.423 [2024-12-03 11:33:25.837347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576209 ] 00:05:55.423 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.423 [2024-12-03 11:33:25.905201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.423 [2024-12-03 11:33:25.972567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.423 [2024-12-03 11:33:25.972709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.358 11:33:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.358 11:33:26 -- common/autotest_common.sh@862 -- # return 0 00:05:56.358 11:33:26 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:56.358 11:33:26 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3576209 00:05:56.358 11:33:26 -- common/autotest_common.sh@936 -- # '[' -z 3576209 ']' 00:05:56.358 11:33:26 -- common/autotest_common.sh@940 -- # kill -0 3576209 00:05:56.358 11:33:26 -- common/autotest_common.sh@941 -- # uname 00:05:56.358 11:33:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.358 11:33:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3576209 00:05:56.358 11:33:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.358 11:33:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.358 11:33:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3576209' 00:05:56.358 killing process with pid 3576209 00:05:56.358 11:33:26 -- common/autotest_common.sh@955 -- # kill 3576209 00:05:56.358 11:33:26 -- 
common/autotest_common.sh@960 -- # wait 3576209 00:05:56.926 00:05:56.926 real 0m1.641s 00:05:56.926 user 0m1.709s 00:05:56.926 sys 0m0.500s 00:05:56.926 11:33:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.926 11:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.926 ************************************ 00:05:56.926 END TEST alias_rpc 00:05:56.926 ************************************ 00:05:56.926 11:33:27 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:56.926 11:33:27 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:56.926 11:33:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.926 11:33:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.926 11:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.926 ************************************ 00:05:56.926 START TEST spdkcli_tcp 00:05:56.926 ************************************ 00:05:56.926 11:33:27 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:56.926 * Looking for test storage... 00:05:56.926 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:56.926 11:33:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.926 11:33:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.926 11:33:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:56.926 11:33:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:56.926 11:33:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:56.926 11:33:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:56.926 11:33:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:56.926 11:33:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:56.926 11:33:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:56.926 11:33:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.926 11:33:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:56.926 11:33:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:56.926 11:33:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:56.926 11:33:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:56.926 11:33:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:56.926 11:33:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:56.926 11:33:27 -- scripts/common.sh@344 -- # : 1 00:05:56.926 11:33:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:56.926 11:33:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.926 11:33:27 -- scripts/common.sh@364 -- # decimal 1 00:05:56.926 11:33:27 -- scripts/common.sh@352 -- # local d=1 00:05:56.926 11:33:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.926 11:33:27 -- scripts/common.sh@354 -- # echo 1 00:05:56.926 11:33:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:56.926 11:33:27 -- scripts/common.sh@365 -- # decimal 2 00:05:56.926 11:33:27 -- scripts/common.sh@352 -- # local d=2 00:05:56.926 11:33:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.926 11:33:27 -- scripts/common.sh@354 -- # echo 2 00:05:56.926 11:33:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:56.926 11:33:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:56.926 11:33:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:56.926 11:33:27 -- scripts/common.sh@367 -- # return 0 00:05:56.926 11:33:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.926 11:33:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:56.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.926 --rc genhtml_branch_coverage=1 00:05:56.926 --rc genhtml_function_coverage=1 00:05:56.926 --rc genhtml_legend=1 00:05:56.926 --rc geninfo_all_blocks=1 00:05:56.926 --rc geninfo_unexecuted_blocks=1 00:05:56.926 00:05:56.926 ' 00:05:56.926 11:33:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:56.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.926 --rc genhtml_branch_coverage=1 00:05:56.926 --rc genhtml_function_coverage=1 00:05:56.926 --rc genhtml_legend=1 00:05:56.926 --rc geninfo_all_blocks=1 00:05:56.926 --rc geninfo_unexecuted_blocks=1 00:05:56.926 00:05:56.926 ' 00:05:56.926 11:33:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:56.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.926 --rc genhtml_branch_coverage=1 00:05:56.926 --rc genhtml_function_coverage=1 00:05:56.926 --rc genhtml_legend=1 00:05:56.926 --rc geninfo_all_blocks=1 00:05:56.926 --rc geninfo_unexecuted_blocks=1 00:05:56.926 00:05:56.926 ' 00:05:56.926 11:33:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:56.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.926 --rc genhtml_branch_coverage=1 00:05:56.926 --rc genhtml_function_coverage=1 00:05:56.926 --rc genhtml_legend=1 00:05:56.926 --rc geninfo_all_blocks=1 00:05:56.926 --rc geninfo_unexecuted_blocks=1 00:05:56.926 00:05:56.926 ' 00:05:56.926 11:33:27 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:56.926 11:33:27 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:56.926 11:33:27 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:56.926 11:33:27 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:56.926 11:33:27 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:56.926 11:33:27 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:56.926 11:33:27 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:56.926 11:33:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.926 11:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.926 11:33:27 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3576536 00:05:56.926 11:33:27 -- spdkcli/tcp.sh@27 -- # waitforlisten 3576536 00:05:56.926 11:33:27 -- 
spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:56.926 11:33:27 -- common/autotest_common.sh@829 -- # '[' -z 3576536 ']' 00:05:56.926 11:33:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.926 11:33:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.926 11:33:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.926 11:33:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.926 11:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.926 [2024-12-03 11:33:27.530635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.926 [2024-12-03 11:33:27.530686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576536 ] 00:05:57.185 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.185 [2024-12-03 11:33:27.599864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.185 [2024-12-03 11:33:27.672374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.185 [2024-12-03 11:33:27.672528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.185 [2024-12-03 11:33:27.672530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.751 11:33:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.751 11:33:28 -- common/autotest_common.sh@862 -- # return 0 00:05:57.751 11:33:28 -- spdkcli/tcp.sh@31 -- # socat_pid=3576805 00:05:57.751 11:33:28 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:57.751 11:33:28 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:58.010 [ 00:05:58.010 "bdev_malloc_delete", 00:05:58.010 "bdev_malloc_create", 00:05:58.010 "bdev_null_resize", 00:05:58.010 "bdev_null_delete", 00:05:58.010 "bdev_null_create", 00:05:58.010 "bdev_nvme_cuse_unregister", 00:05:58.010 "bdev_nvme_cuse_register", 00:05:58.010 "bdev_opal_new_user", 00:05:58.010 "bdev_opal_set_lock_state", 00:05:58.010 "bdev_opal_delete", 00:05:58.010 "bdev_opal_get_info", 00:05:58.010 "bdev_opal_create", 00:05:58.010 "bdev_nvme_opal_revert", 00:05:58.010 "bdev_nvme_opal_init", 00:05:58.010 "bdev_nvme_send_cmd", 00:05:58.010 "bdev_nvme_get_path_iostat", 00:05:58.010 "bdev_nvme_get_mdns_discovery_info", 00:05:58.010 "bdev_nvme_stop_mdns_discovery", 00:05:58.010 "bdev_nvme_start_mdns_discovery", 00:05:58.010 "bdev_nvme_set_multipath_policy", 00:05:58.010 "bdev_nvme_set_preferred_path", 00:05:58.010 "bdev_nvme_get_io_paths", 00:05:58.010 "bdev_nvme_remove_error_injection", 00:05:58.010 "bdev_nvme_add_error_injection", 00:05:58.010 "bdev_nvme_get_discovery_info", 00:05:58.010 "bdev_nvme_stop_discovery", 00:05:58.010 "bdev_nvme_start_discovery", 00:05:58.010 "bdev_nvme_get_controller_health_info", 00:05:58.010 "bdev_nvme_disable_controller", 00:05:58.010 "bdev_nvme_enable_controller", 00:05:58.010 "bdev_nvme_reset_controller", 00:05:58.010 "bdev_nvme_get_transport_statistics", 00:05:58.010 "bdev_nvme_apply_firmware", 00:05:58.010 "bdev_nvme_detach_controller", 
00:05:58.010 "bdev_nvme_get_controllers", 00:05:58.010 "bdev_nvme_attach_controller", 00:05:58.010 "bdev_nvme_set_hotplug", 00:05:58.010 "bdev_nvme_set_options", 00:05:58.010 "bdev_passthru_delete", 00:05:58.010 "bdev_passthru_create", 00:05:58.010 "bdev_lvol_grow_lvstore", 00:05:58.010 "bdev_lvol_get_lvols", 00:05:58.010 "bdev_lvol_get_lvstores", 00:05:58.010 "bdev_lvol_delete", 00:05:58.010 "bdev_lvol_set_read_only", 00:05:58.010 "bdev_lvol_resize", 00:05:58.010 "bdev_lvol_decouple_parent", 00:05:58.010 "bdev_lvol_inflate", 00:05:58.010 "bdev_lvol_rename", 00:05:58.010 "bdev_lvol_clone_bdev", 00:05:58.010 "bdev_lvol_clone", 00:05:58.010 "bdev_lvol_snapshot", 00:05:58.010 "bdev_lvol_create", 00:05:58.010 "bdev_lvol_delete_lvstore", 00:05:58.010 "bdev_lvol_rename_lvstore", 00:05:58.010 "bdev_lvol_create_lvstore", 00:05:58.010 "bdev_raid_set_options", 00:05:58.010 "bdev_raid_remove_base_bdev", 00:05:58.010 "bdev_raid_add_base_bdev", 00:05:58.010 "bdev_raid_delete", 00:05:58.010 "bdev_raid_create", 00:05:58.010 "bdev_raid_get_bdevs", 00:05:58.010 "bdev_error_inject_error", 00:05:58.010 "bdev_error_delete", 00:05:58.010 "bdev_error_create", 00:05:58.010 "bdev_split_delete", 00:05:58.010 "bdev_split_create", 00:05:58.010 "bdev_delay_delete", 00:05:58.010 "bdev_delay_create", 00:05:58.010 "bdev_delay_update_latency", 00:05:58.010 "bdev_zone_block_delete", 00:05:58.010 "bdev_zone_block_create", 00:05:58.010 "blobfs_create", 00:05:58.010 "blobfs_detect", 00:05:58.010 "blobfs_set_cache_size", 00:05:58.010 "bdev_aio_delete", 00:05:58.010 "bdev_aio_rescan", 00:05:58.010 "bdev_aio_create", 00:05:58.010 "bdev_ftl_set_property", 00:05:58.010 "bdev_ftl_get_properties", 00:05:58.010 "bdev_ftl_get_stats", 00:05:58.010 "bdev_ftl_unmap", 00:05:58.010 "bdev_ftl_unload", 00:05:58.010 "bdev_ftl_delete", 00:05:58.010 "bdev_ftl_load", 00:05:58.010 "bdev_ftl_create", 00:05:58.010 "bdev_virtio_attach_controller", 00:05:58.010 "bdev_virtio_scsi_get_devices", 00:05:58.010 "bdev_virtio_detach_controller", 00:05:58.010 "bdev_virtio_blk_set_hotplug", 00:05:58.010 "bdev_iscsi_delete", 00:05:58.010 "bdev_iscsi_create", 00:05:58.010 "bdev_iscsi_set_options", 00:05:58.010 "accel_error_inject_error", 00:05:58.010 "ioat_scan_accel_module", 00:05:58.010 "dsa_scan_accel_module", 00:05:58.010 "iaa_scan_accel_module", 00:05:58.010 "iscsi_set_options", 00:05:58.010 "iscsi_get_auth_groups", 00:05:58.010 "iscsi_auth_group_remove_secret", 00:05:58.010 "iscsi_auth_group_add_secret", 00:05:58.010 "iscsi_delete_auth_group", 00:05:58.011 "iscsi_create_auth_group", 00:05:58.011 "iscsi_set_discovery_auth", 00:05:58.011 "iscsi_get_options", 00:05:58.011 "iscsi_target_node_request_logout", 00:05:58.011 "iscsi_target_node_set_redirect", 00:05:58.011 "iscsi_target_node_set_auth", 00:05:58.011 "iscsi_target_node_add_lun", 00:05:58.011 "iscsi_get_connections", 00:05:58.011 "iscsi_portal_group_set_auth", 00:05:58.011 "iscsi_start_portal_group", 00:05:58.011 "iscsi_delete_portal_group", 00:05:58.011 "iscsi_create_portal_group", 00:05:58.011 "iscsi_get_portal_groups", 00:05:58.011 "iscsi_delete_target_node", 00:05:58.011 "iscsi_target_node_remove_pg_ig_maps", 00:05:58.011 "iscsi_target_node_add_pg_ig_maps", 00:05:58.011 "iscsi_create_target_node", 00:05:58.011 "iscsi_get_target_nodes", 00:05:58.011 "iscsi_delete_initiator_group", 00:05:58.011 "iscsi_initiator_group_remove_initiators", 00:05:58.011 "iscsi_initiator_group_add_initiators", 00:05:58.011 "iscsi_create_initiator_group", 00:05:58.011 "iscsi_get_initiator_groups", 00:05:58.011 
"nvmf_set_crdt", 00:05:58.011 "nvmf_set_config", 00:05:58.011 "nvmf_set_max_subsystems", 00:05:58.011 "nvmf_subsystem_get_listeners", 00:05:58.011 "nvmf_subsystem_get_qpairs", 00:05:58.011 "nvmf_subsystem_get_controllers", 00:05:58.011 "nvmf_get_stats", 00:05:58.011 "nvmf_get_transports", 00:05:58.011 "nvmf_create_transport", 00:05:58.011 "nvmf_get_targets", 00:05:58.011 "nvmf_delete_target", 00:05:58.011 "nvmf_create_target", 00:05:58.011 "nvmf_subsystem_allow_any_host", 00:05:58.011 "nvmf_subsystem_remove_host", 00:05:58.011 "nvmf_subsystem_add_host", 00:05:58.011 "nvmf_subsystem_remove_ns", 00:05:58.011 "nvmf_subsystem_add_ns", 00:05:58.011 "nvmf_subsystem_listener_set_ana_state", 00:05:58.011 "nvmf_discovery_get_referrals", 00:05:58.011 "nvmf_discovery_remove_referral", 00:05:58.011 "nvmf_discovery_add_referral", 00:05:58.011 "nvmf_subsystem_remove_listener", 00:05:58.011 "nvmf_subsystem_add_listener", 00:05:58.011 "nvmf_delete_subsystem", 00:05:58.011 "nvmf_create_subsystem", 00:05:58.011 "nvmf_get_subsystems", 00:05:58.011 "env_dpdk_get_mem_stats", 00:05:58.011 "nbd_get_disks", 00:05:58.011 "nbd_stop_disk", 00:05:58.011 "nbd_start_disk", 00:05:58.011 "ublk_recover_disk", 00:05:58.011 "ublk_get_disks", 00:05:58.011 "ublk_stop_disk", 00:05:58.011 "ublk_start_disk", 00:05:58.011 "ublk_destroy_target", 00:05:58.011 "ublk_create_target", 00:05:58.011 "virtio_blk_create_transport", 00:05:58.011 "virtio_blk_get_transports", 00:05:58.011 "vhost_controller_set_coalescing", 00:05:58.011 "vhost_get_controllers", 00:05:58.011 "vhost_delete_controller", 00:05:58.011 "vhost_create_blk_controller", 00:05:58.011 "vhost_scsi_controller_remove_target", 00:05:58.011 "vhost_scsi_controller_add_target", 00:05:58.011 "vhost_start_scsi_controller", 00:05:58.011 "vhost_create_scsi_controller", 00:05:58.011 "thread_set_cpumask", 00:05:58.011 "framework_get_scheduler", 00:05:58.011 "framework_set_scheduler", 00:05:58.011 "framework_get_reactors", 00:05:58.011 "thread_get_io_channels", 00:05:58.011 "thread_get_pollers", 00:05:58.011 "thread_get_stats", 00:05:58.011 "framework_monitor_context_switch", 00:05:58.011 "spdk_kill_instance", 00:05:58.011 "log_enable_timestamps", 00:05:58.011 "log_get_flags", 00:05:58.011 "log_clear_flag", 00:05:58.011 "log_set_flag", 00:05:58.011 "log_get_level", 00:05:58.011 "log_set_level", 00:05:58.011 "log_get_print_level", 00:05:58.011 "log_set_print_level", 00:05:58.011 "framework_enable_cpumask_locks", 00:05:58.011 "framework_disable_cpumask_locks", 00:05:58.011 "framework_wait_init", 00:05:58.011 "framework_start_init", 00:05:58.011 "scsi_get_devices", 00:05:58.011 "bdev_get_histogram", 00:05:58.011 "bdev_enable_histogram", 00:05:58.011 "bdev_set_qos_limit", 00:05:58.011 "bdev_set_qd_sampling_period", 00:05:58.011 "bdev_get_bdevs", 00:05:58.011 "bdev_reset_iostat", 00:05:58.011 "bdev_get_iostat", 00:05:58.011 "bdev_examine", 00:05:58.011 "bdev_wait_for_examine", 00:05:58.011 "bdev_set_options", 00:05:58.011 "notify_get_notifications", 00:05:58.011 "notify_get_types", 00:05:58.011 "accel_get_stats", 00:05:58.011 "accel_set_options", 00:05:58.011 "accel_set_driver", 00:05:58.011 "accel_crypto_key_destroy", 00:05:58.011 "accel_crypto_keys_get", 00:05:58.011 "accel_crypto_key_create", 00:05:58.011 "accel_assign_opc", 00:05:58.011 "accel_get_module_info", 00:05:58.011 "accel_get_opc_assignments", 00:05:58.011 "vmd_rescan", 00:05:58.011 "vmd_remove_device", 00:05:58.011 "vmd_enable", 00:05:58.011 "sock_set_default_impl", 00:05:58.011 "sock_impl_set_options", 00:05:58.011 
"sock_impl_get_options", 00:05:58.011 "iobuf_get_stats", 00:05:58.011 "iobuf_set_options", 00:05:58.011 "framework_get_pci_devices", 00:05:58.011 "framework_get_config", 00:05:58.011 "framework_get_subsystems", 00:05:58.011 "trace_get_info", 00:05:58.011 "trace_get_tpoint_group_mask", 00:05:58.011 "trace_disable_tpoint_group", 00:05:58.011 "trace_enable_tpoint_group", 00:05:58.011 "trace_clear_tpoint_mask", 00:05:58.011 "trace_set_tpoint_mask", 00:05:58.011 "spdk_get_version", 00:05:58.011 "rpc_get_methods" 00:05:58.011 ] 00:05:58.011 11:33:28 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:58.011 11:33:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.011 11:33:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.011 11:33:28 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:58.011 11:33:28 -- spdkcli/tcp.sh@38 -- # killprocess 3576536 00:05:58.011 11:33:28 -- common/autotest_common.sh@936 -- # '[' -z 3576536 ']' 00:05:58.011 11:33:28 -- common/autotest_common.sh@940 -- # kill -0 3576536 00:05:58.011 11:33:28 -- common/autotest_common.sh@941 -- # uname 00:05:58.011 11:33:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.011 11:33:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3576536 00:05:58.011 11:33:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.011 11:33:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.011 11:33:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3576536' 00:05:58.011 killing process with pid 3576536 00:05:58.011 11:33:28 -- common/autotest_common.sh@955 -- # kill 3576536 00:05:58.011 11:33:28 -- common/autotest_common.sh@960 -- # wait 3576536 00:05:58.580 00:05:58.580 real 0m1.651s 00:05:58.580 user 0m2.926s 00:05:58.580 sys 0m0.500s 00:05:58.580 11:33:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.580 11:33:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.580 ************************************ 00:05:58.580 END TEST spdkcli_tcp 00:05:58.580 ************************************ 00:05:58.580 11:33:28 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.580 11:33:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.580 11:33:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.580 11:33:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.580 ************************************ 00:05:58.580 START TEST dpdk_mem_utility 00:05:58.580 ************************************ 00:05:58.580 11:33:28 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.580 * Looking for test storage... 
00:05:58.580 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:58.580 11:33:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.580 11:33:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.580 11:33:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.580 11:33:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.580 11:33:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.580 11:33:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.580 11:33:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.580 11:33:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.580 11:33:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.580 11:33:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.580 11:33:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.580 11:33:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.580 11:33:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.580 11:33:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.580 11:33:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.580 11:33:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.580 11:33:29 -- scripts/common.sh@344 -- # : 1 00:05:58.580 11:33:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.580 11:33:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.580 11:33:29 -- scripts/common.sh@364 -- # decimal 1 00:05:58.580 11:33:29 -- scripts/common.sh@352 -- # local d=1 00:05:58.580 11:33:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.580 11:33:29 -- scripts/common.sh@354 -- # echo 1 00:05:58.580 11:33:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.580 11:33:29 -- scripts/common.sh@365 -- # decimal 2 00:05:58.580 11:33:29 -- scripts/common.sh@352 -- # local d=2 00:05:58.580 11:33:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.580 11:33:29 -- scripts/common.sh@354 -- # echo 2 00:05:58.580 11:33:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.580 11:33:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.580 11:33:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.580 11:33:29 -- scripts/common.sh@367 -- # return 0 00:05:58.580 11:33:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.580 11:33:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.580 --rc genhtml_branch_coverage=1 00:05:58.580 --rc genhtml_function_coverage=1 00:05:58.580 --rc genhtml_legend=1 00:05:58.580 --rc geninfo_all_blocks=1 00:05:58.580 --rc geninfo_unexecuted_blocks=1 00:05:58.580 00:05:58.580 ' 00:05:58.580 11:33:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.580 --rc genhtml_branch_coverage=1 00:05:58.580 --rc genhtml_function_coverage=1 00:05:58.580 --rc genhtml_legend=1 00:05:58.580 --rc geninfo_all_blocks=1 00:05:58.580 --rc geninfo_unexecuted_blocks=1 00:05:58.580 00:05:58.581 ' 00:05:58.581 11:33:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.581 --rc genhtml_branch_coverage=1 00:05:58.581 --rc genhtml_function_coverage=1 00:05:58.581 --rc genhtml_legend=1 00:05:58.581 --rc geninfo_all_blocks=1 00:05:58.581 --rc geninfo_unexecuted_blocks=1 00:05:58.581 
00:05:58.581 ' 00:05:58.581 11:33:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.581 --rc genhtml_branch_coverage=1 00:05:58.581 --rc genhtml_function_coverage=1 00:05:58.581 --rc genhtml_legend=1 00:05:58.581 --rc geninfo_all_blocks=1 00:05:58.581 --rc geninfo_unexecuted_blocks=1 00:05:58.581 00:05:58.581 ' 00:05:58.581 11:33:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:58.581 11:33:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.581 11:33:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3576939 00:05:58.581 11:33:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3576939 00:05:58.581 11:33:29 -- common/autotest_common.sh@829 -- # '[' -z 3576939 ']' 00:05:58.581 11:33:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.581 11:33:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.581 11:33:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.581 11:33:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.581 11:33:29 -- common/autotest_common.sh@10 -- # set +x 00:05:58.840 [2024-12-03 11:33:29.212199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.840 [2024-12-03 11:33:29.212258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576939 ] 00:05:58.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.840 [2024-12-03 11:33:29.276366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.840 [2024-12-03 11:33:29.349645] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.840 [2024-12-03 11:33:29.349757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.776 11:33:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.776 11:33:30 -- common/autotest_common.sh@862 -- # return 0 00:05:59.776 11:33:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.776 11:33:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.776 11:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.776 11:33:30 -- common/autotest_common.sh@10 -- # set +x 00:05:59.776 { 00:05:59.776 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.776 } 00:05:59.776 11:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.776 11:33:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.776 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:59.776 1 heaps totaling size 814.000000 MiB 00:05:59.776 size: 814.000000 MiB heap id: 0 00:05:59.776 end heaps---------- 00:05:59.776 8 mempools totaling size 598.116089 MiB 00:05:59.776 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.776 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.776 size: 84.521057 MiB name: 
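The dpdk_mem_utility suite starting here follows a short flow: launch the target, ask it to dump DPDK memory statistics with the env_dpdk_get_mem_stats RPC (which reports the dump file /tmp/spdk_mem_dump.txt, as the response below shows), then summarize the dump with scripts/dpdk_mem_info.py. A rough sketch of that flow, with RPC and script names from the trace and the startup wait simplified for illustration:

    # Start the target, dump DPDK memory stats, and summarize them.
    ./build/bin/spdk_tgt &
    TGT_PID=$!
    sleep 2                              # illustrative; the real helper polls the RPC socket
    ./scripts/rpc.py env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # heap/mempool/memzone totals, as dumped below
    ./scripts/dpdk_mem_info.py -m 0            # per-heap element breakdown for heap id 0
    kill ${TGT_PID}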
bdev_io_3576939 00:05:59.776 size: 51.011292 MiB name: evtpool_3576939 00:05:59.776 size: 50.003479 MiB name: msgpool_3576939 00:05:59.776 size: 21.763794 MiB name: PDU_Pool 00:05:59.776 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.776 size: 0.026123 MiB name: Session_Pool 00:05:59.776 end mempools------- 00:05:59.776 6 memzones totaling size 4.142822 MiB 00:05:59.776 size: 1.000366 MiB name: RG_ring_0_3576939 00:05:59.776 size: 1.000366 MiB name: RG_ring_1_3576939 00:05:59.776 size: 1.000366 MiB name: RG_ring_4_3576939 00:05:59.776 size: 1.000366 MiB name: RG_ring_5_3576939 00:05:59.776 size: 0.125366 MiB name: RG_ring_2_3576939 00:05:59.776 size: 0.015991 MiB name: RG_ring_3_3576939 00:05:59.776 end memzones------- 00:05:59.776 11:33:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.776 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:59.776 list of free elements. size: 12.519348 MiB 00:05:59.776 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:59.776 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:59.776 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:59.776 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:59.776 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:59.776 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:59.776 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:59.776 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:59.776 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:59.776 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:59.776 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:59.776 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:59.776 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:59.776 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:59.776 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:59.776 list of standard malloc elements. 
size: 199.218079 MiB 00:05:59.776 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:59.776 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:59.776 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:59.776 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:59.776 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:59.776 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:59.776 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:59.776 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:59.776 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:59.776 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:59.776 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:59.776 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:59.776 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:59.776 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:59.776 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:59.776 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:59.776 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:59.776 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:59.776 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:59.776 list of memzone associated elements. 
size: 602.262573 MiB 00:05:59.776 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:59.776 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.776 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:59.776 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.776 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:59.776 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3576939_0 00:05:59.776 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:59.776 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3576939_0 00:05:59.776 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:59.776 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3576939_0 00:05:59.776 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:59.776 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.776 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:59.776 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.776 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:59.776 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3576939 00:05:59.776 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:59.776 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3576939 00:05:59.776 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:59.776 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3576939 00:05:59.776 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:59.776 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.776 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:59.776 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.776 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:59.776 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.776 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:59.776 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.776 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:59.776 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3576939 00:05:59.776 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:59.776 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3576939 00:05:59.776 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:59.776 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3576939 00:05:59.776 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:59.776 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3576939 00:05:59.776 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:59.776 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3576939 00:05:59.776 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:59.776 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.776 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:59.777 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.777 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:59.777 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.777 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:59.777 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3576939 00:05:59.777 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:59.777 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.777 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:59.777 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.777 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:59.777 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3576939 00:05:59.777 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:59.777 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.777 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:59.777 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3576939 00:05:59.777 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:59.777 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3576939 00:05:59.777 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:59.777 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.777 11:33:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.777 11:33:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3576939 00:05:59.777 11:33:30 -- common/autotest_common.sh@936 -- # '[' -z 3576939 ']' 00:05:59.777 11:33:30 -- common/autotest_common.sh@940 -- # kill -0 3576939 00:05:59.777 11:33:30 -- common/autotest_common.sh@941 -- # uname 00:05:59.777 11:33:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.777 11:33:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3576939 00:05:59.777 11:33:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.777 11:33:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.777 11:33:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3576939' 00:05:59.777 killing process with pid 3576939 00:05:59.777 11:33:30 -- common/autotest_common.sh@955 -- # kill 3576939 00:05:59.777 11:33:30 -- common/autotest_common.sh@960 -- # wait 3576939 00:06:00.036 00:06:00.036 real 0m1.549s 00:06:00.036 user 0m1.594s 00:06:00.036 sys 0m0.466s 00:06:00.036 11:33:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.036 11:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.036 ************************************ 00:06:00.036 END TEST dpdk_mem_utility 00:06:00.036 ************************************ 00:06:00.036 11:33:30 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:00.036 11:33:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.036 11:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.036 11:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.036 ************************************ 00:06:00.036 START TEST event 00:06:00.036 ************************************ 00:06:00.037 11:33:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:00.296 * Looking for test storage... 
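Each suite above tears its target down through the same killprocess steps visible in the trace: confirm the PID is still alive with kill -0, read the process name (reactor_0 for an SPDK target), then kill it and wait for it to exit. A hedged bash approximation of that pattern:

    # Approximate killprocess pattern from the traces above; the real helper
    # also handles sudo-owned processes and extra sanity checks.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                  # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for spdk_tgt
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # works because the target is a child of this shell
    }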
00:06:00.296 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:00.296 11:33:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:00.296 11:33:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:00.296 11:33:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:00.296 11:33:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:00.296 11:33:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:00.296 11:33:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:00.296 11:33:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:00.296 11:33:30 -- scripts/common.sh@335 -- # IFS=.-: 00:06:00.296 11:33:30 -- scripts/common.sh@335 -- # read -ra ver1 00:06:00.296 11:33:30 -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.296 11:33:30 -- scripts/common.sh@336 -- # read -ra ver2 00:06:00.296 11:33:30 -- scripts/common.sh@337 -- # local 'op=<' 00:06:00.296 11:33:30 -- scripts/common.sh@339 -- # ver1_l=2 00:06:00.296 11:33:30 -- scripts/common.sh@340 -- # ver2_l=1 00:06:00.296 11:33:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:00.296 11:33:30 -- scripts/common.sh@343 -- # case "$op" in 00:06:00.296 11:33:30 -- scripts/common.sh@344 -- # : 1 00:06:00.296 11:33:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:00.296 11:33:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.296 11:33:30 -- scripts/common.sh@364 -- # decimal 1 00:06:00.296 11:33:30 -- scripts/common.sh@352 -- # local d=1 00:06:00.296 11:33:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.296 11:33:30 -- scripts/common.sh@354 -- # echo 1 00:06:00.296 11:33:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:00.296 11:33:30 -- scripts/common.sh@365 -- # decimal 2 00:06:00.296 11:33:30 -- scripts/common.sh@352 -- # local d=2 00:06:00.296 11:33:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.296 11:33:30 -- scripts/common.sh@354 -- # echo 2 00:06:00.296 11:33:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:00.296 11:33:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:00.296 11:33:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:00.296 11:33:30 -- scripts/common.sh@367 -- # return 0 00:06:00.296 11:33:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.296 11:33:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:00.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.296 --rc genhtml_branch_coverage=1 00:06:00.296 --rc genhtml_function_coverage=1 00:06:00.296 --rc genhtml_legend=1 00:06:00.296 --rc geninfo_all_blocks=1 00:06:00.296 --rc geninfo_unexecuted_blocks=1 00:06:00.296 00:06:00.296 ' 00:06:00.296 11:33:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:00.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.296 --rc genhtml_branch_coverage=1 00:06:00.296 --rc genhtml_function_coverage=1 00:06:00.296 --rc genhtml_legend=1 00:06:00.296 --rc geninfo_all_blocks=1 00:06:00.296 --rc geninfo_unexecuted_blocks=1 00:06:00.296 00:06:00.296 ' 00:06:00.296 11:33:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:00.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.296 --rc genhtml_branch_coverage=1 00:06:00.296 --rc genhtml_function_coverage=1 00:06:00.296 --rc genhtml_legend=1 00:06:00.296 --rc geninfo_all_blocks=1 00:06:00.296 --rc geninfo_unexecuted_blocks=1 00:06:00.296 00:06:00.296 ' 
00:06:00.296 11:33:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:00.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.296 --rc genhtml_branch_coverage=1 00:06:00.296 --rc genhtml_function_coverage=1 00:06:00.296 --rc genhtml_legend=1 00:06:00.296 --rc geninfo_all_blocks=1 00:06:00.296 --rc geninfo_unexecuted_blocks=1 00:06:00.296 00:06:00.296 ' 00:06:00.296 11:33:30 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:00.296 11:33:30 -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.296 11:33:30 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.296 11:33:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:00.296 11:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.296 11:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.296 ************************************ 00:06:00.296 START TEST event_perf 00:06:00.296 ************************************ 00:06:00.296 11:33:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.296 Running I/O for 1 seconds...[2024-12-03 11:33:30.807854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.296 [2024-12-03 11:33:30.807935] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577309 ] 00:06:00.296 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.296 [2024-12-03 11:33:30.881223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.556 [2024-12-03 11:33:30.953336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.556 [2024-12-03 11:33:30.953429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.556 [2024-12-03 11:33:30.953518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.556 [2024-12-03 11:33:30.953520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.490 Running I/O for 1 seconds... 00:06:01.490 lcore 0: 208983 00:06:01.490 lcore 1: 208982 00:06:01.490 lcore 2: 208982 00:06:01.490 lcore 3: 208983 00:06:01.490 done. 
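event_perf above was launched with -m 0xF, and the per-lcore counters it prints line up with the four cores that mask selects. A small illustrative snippet for decoding such a mask into the enabled lcore list (the mask value comes from the log; the loop bound is arbitrary):

    # Decode a core mask like the -m 0xF above into the lcores it enables.
    mask=0xF
    for core in $(seq 0 31); do
        if (( (mask >> core) & 1 )); then
            echo "lcore ${core} enabled"
        fi
    done
    # 0xF enables lcores 0-3, which is why event_perf reports four per-lcore counters.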
00:06:01.490 00:06:01.490 real 0m1.251s 00:06:01.490 user 0m4.157s 00:06:01.490 sys 0m0.092s 00:06:01.490 11:33:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.490 11:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.490 ************************************ 00:06:01.490 END TEST event_perf 00:06:01.490 ************************************ 00:06:01.490 11:33:32 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:01.490 11:33:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:01.490 11:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.490 11:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.490 ************************************ 00:06:01.490 START TEST event_reactor 00:06:01.490 ************************************ 00:06:01.490 11:33:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:01.748 [2024-12-03 11:33:32.108886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.748 [2024-12-03 11:33:32.108957] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577511 ] 00:06:01.748 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.748 [2024-12-03 11:33:32.182461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.748 [2024-12-03 11:33:32.246057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.121 test_start 00:06:03.121 oneshot 00:06:03.121 tick 100 00:06:03.121 tick 100 00:06:03.121 tick 250 00:06:03.121 tick 100 00:06:03.121 tick 100 00:06:03.121 tick 100 00:06:03.121 tick 250 00:06:03.121 tick 500 00:06:03.121 tick 100 00:06:03.121 tick 100 00:06:03.121 tick 250 00:06:03.121 tick 100 00:06:03.121 tick 100 00:06:03.121 test_end 00:06:03.121 00:06:03.121 real 0m1.239s 00:06:03.121 user 0m1.151s 00:06:03.121 sys 0m0.082s 00:06:03.121 11:33:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.121 11:33:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.121 ************************************ 00:06:03.121 END TEST event_reactor 00:06:03.121 ************************************ 00:06:03.121 11:33:33 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.121 11:33:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:03.121 11:33:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.122 11:33:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.122 ************************************ 00:06:03.122 START TEST event_reactor_perf 00:06:03.122 ************************************ 00:06:03.122 11:33:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.122 [2024-12-03 11:33:33.396646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
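Every suite in this log is driven through the same run_test helper: it prints the starred START banner, runs the test command under time (producing the real/user/sys lines above), and then prints the END banner. A simplified sketch of that wrapper shape, with the banner text modeled on the log and the failure and xtrace handling of the real helper omitted:

    # Simplified run_test-style wrapper.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST ${name}"
        echo "************************************"
        time "$@"                        # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST ${name}"
        echo "************************************"
        return $rc
    }

    run_test event_reactor_perf ./test/event/reactor_perf/reactor_perf -t 1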
00:06:03.122 [2024-12-03 11:33:33.396713] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577793 ] 00:06:03.122 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.122 [2024-12-03 11:33:33.466818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.122 [2024-12-03 11:33:33.529010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.054 test_start 00:06:04.054 test_end 00:06:04.054 Performance: 525570 events per second 00:06:04.054 00:06:04.054 real 0m1.233s 00:06:04.054 user 0m1.144s 00:06:04.054 sys 0m0.084s 00:06:04.054 11:33:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.054 11:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.054 ************************************ 00:06:04.054 END TEST event_reactor_perf 00:06:04.054 ************************************ 00:06:04.054 11:33:34 -- event/event.sh@49 -- # uname -s 00:06:04.054 11:33:34 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.054 11:33:34 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.054 11:33:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.054 11:33:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.054 11:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.054 ************************************ 00:06:04.054 START TEST event_scheduler 00:06:04.054 ************************************ 00:06:04.054 11:33:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.313 * Looking for test storage... 00:06:04.313 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:04.313 11:33:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.313 11:33:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.313 11:33:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:04.313 11:33:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:04.313 11:33:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:04.313 11:33:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:04.313 11:33:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:04.313 11:33:34 -- scripts/common.sh@335 -- # IFS=.-: 00:06:04.313 11:33:34 -- scripts/common.sh@335 -- # read -ra ver1 00:06:04.313 11:33:34 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.313 11:33:34 -- scripts/common.sh@336 -- # read -ra ver2 00:06:04.313 11:33:34 -- scripts/common.sh@337 -- # local 'op=<' 00:06:04.313 11:33:34 -- scripts/common.sh@339 -- # ver1_l=2 00:06:04.313 11:33:34 -- scripts/common.sh@340 -- # ver2_l=1 00:06:04.313 11:33:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:04.313 11:33:34 -- scripts/common.sh@343 -- # case "$op" in 00:06:04.313 11:33:34 -- scripts/common.sh@344 -- # : 1 00:06:04.313 11:33:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:04.313 11:33:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.313 11:33:34 -- scripts/common.sh@364 -- # decimal 1 00:06:04.313 11:33:34 -- scripts/common.sh@352 -- # local d=1 00:06:04.313 11:33:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.313 11:33:34 -- scripts/common.sh@354 -- # echo 1 00:06:04.313 11:33:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:04.313 11:33:34 -- scripts/common.sh@365 -- # decimal 2 00:06:04.313 11:33:34 -- scripts/common.sh@352 -- # local d=2 00:06:04.313 11:33:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.313 11:33:34 -- scripts/common.sh@354 -- # echo 2 00:06:04.313 11:33:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:04.313 11:33:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:04.313 11:33:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:04.313 11:33:34 -- scripts/common.sh@367 -- # return 0 00:06:04.313 11:33:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.313 11:33:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.313 --rc genhtml_branch_coverage=1 00:06:04.313 --rc genhtml_function_coverage=1 00:06:04.313 --rc genhtml_legend=1 00:06:04.313 --rc geninfo_all_blocks=1 00:06:04.313 --rc geninfo_unexecuted_blocks=1 00:06:04.313 00:06:04.313 ' 00:06:04.313 11:33:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.313 --rc genhtml_branch_coverage=1 00:06:04.313 --rc genhtml_function_coverage=1 00:06:04.313 --rc genhtml_legend=1 00:06:04.313 --rc geninfo_all_blocks=1 00:06:04.313 --rc geninfo_unexecuted_blocks=1 00:06:04.313 00:06:04.313 ' 00:06:04.313 11:33:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.313 --rc genhtml_branch_coverage=1 00:06:04.313 --rc genhtml_function_coverage=1 00:06:04.313 --rc genhtml_legend=1 00:06:04.313 --rc geninfo_all_blocks=1 00:06:04.313 --rc geninfo_unexecuted_blocks=1 00:06:04.313 00:06:04.313 ' 00:06:04.313 11:33:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.313 --rc genhtml_branch_coverage=1 00:06:04.313 --rc genhtml_function_coverage=1 00:06:04.313 --rc genhtml_legend=1 00:06:04.313 --rc geninfo_all_blocks=1 00:06:04.313 --rc geninfo_unexecuted_blocks=1 00:06:04.313 00:06:04.313 ' 00:06:04.313 11:33:34 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.313 11:33:34 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3578115 00:06:04.313 11:33:34 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.313 11:33:34 -- scheduler/scheduler.sh@37 -- # waitforlisten 3578115 00:06:04.313 11:33:34 -- common/autotest_common.sh@829 -- # '[' -z 3578115 ']' 00:06:04.313 11:33:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.313 11:33:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.313 11:33:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
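The scheduler app launched below is given time to come up by the same waitforlisten step used for every target in this log: retry up to max_retries=100 until the UNIX domain socket /var/tmp/spdk.sock answers an RPC. A rough bash sketch of that wait loop, with the retry bound and socket path from the trace and the probe RPC chosen for illustration:

    # Poll until the SPDK RPC socket answers, or give up after max_retries attempts.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                 # target died while starting
            if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0                               # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }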
00:06:04.313 11:33:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.313 11:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.313 11:33:34 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:04.313 [2024-12-03 11:33:34.888716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.313 [2024-12-03 11:33:34.888774] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578115 ] 00:06:04.313 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.579 [2024-12-03 11:33:34.953934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.579 [2024-12-03 11:33:35.029068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.579 [2024-12-03 11:33:35.029152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.579 [2024-12-03 11:33:35.029177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.579 [2024-12-03 11:33:35.029180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.142 11:33:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.142 11:33:35 -- common/autotest_common.sh@862 -- # return 0 00:06:05.142 11:33:35 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.142 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.142 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.142 POWER: Env isn't set yet! 00:06:05.142 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:05.142 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.142 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.142 POWER: Attempting to initialise PSTAT power management... 00:06:05.142 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:05.142 POWER: Initialized successfully for lcore 0 power management 00:06:05.400 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:05.400 POWER: Initialized successfully for lcore 1 power management 00:06:05.400 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:05.400 POWER: Initialized successfully for lcore 2 power management 00:06:05.400 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:05.400 POWER: Initialized successfully for lcore 3 power management 00:06:05.400 [2024-12-03 11:33:35.777265] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.400 [2024-12-03 11:33:35.777280] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.400 [2024-12-03 11:33:35.777289] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 [2024-12-03 11:33:35.845716] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
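The scheduler suite configures the target before the framework initializes: framework_set_scheduler dynamic is issued while the app waits with --wait-for-rpc, framework_start_init then brings the reactors up (the power-management and "Scheduler test application started" notices above), and the test creates threads through its scheduler_plugin RPCs, traced below. A hedged sketch of that RPC sequence, with flags, RPC names, and thread parameters taken from the trace and rpc.py standing in for the rpc_cmd helper:

    # Launch the scheduler test app suspended, select the dynamic scheduler, then init.
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc &
    APP_PID=$!
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # The suite then creates test threads through its scheduler plugin, e.g.:
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100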
00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.400 11:33:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.400 11:33:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 ************************************ 00:06:05.400 START TEST scheduler_create_thread 00:06:05.400 ************************************ 00:06:05.400 11:33:35 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 2 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 3 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 4 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 5 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 6 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 7 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 8 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 9 00:06:05.400 
11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.400 10 00:06:05.400 11:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.400 11:33:35 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.400 11:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.400 11:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.964 11:33:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.964 11:33:36 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.964 11:33:36 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.964 11:33:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.964 11:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.898 11:33:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.898 11:33:37 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.898 11:33:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.898 11:33:37 -- common/autotest_common.sh@10 -- # set +x 00:06:07.831 11:33:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.831 11:33:38 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.832 11:33:38 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.832 11:33:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.832 11:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.766 11:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.766 00:06:08.766 real 0m3.231s 00:06:08.766 user 0m0.020s 00:06:08.766 sys 0m0.011s 00:06:08.766 11:33:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.766 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.766 ************************************ 00:06:08.766 END TEST scheduler_create_thread 00:06:08.766 ************************************ 00:06:08.766 11:33:39 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.766 11:33:39 -- scheduler/scheduler.sh@46 -- # killprocess 3578115 00:06:08.766 11:33:39 -- common/autotest_common.sh@936 -- # '[' -z 3578115 ']' 00:06:08.766 11:33:39 -- common/autotest_common.sh@940 -- # kill -0 3578115 00:06:08.766 11:33:39 -- common/autotest_common.sh@941 -- # uname 00:06:08.766 11:33:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.766 11:33:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3578115 00:06:08.766 11:33:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:08.766 11:33:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:08.766 11:33:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3578115' 00:06:08.766 killing process with pid 3578115 00:06:08.766 11:33:39 -- common/autotest_common.sh@955 -- # kill 3578115 00:06:08.766 11:33:39 -- common/autotest_common.sh@960 -- # wait 3578115 00:06:09.025 [2024-12-03 11:33:39.466782] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:09.284 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:09.284 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:09.284 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:09.284 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:09.284 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:09.284 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:09.284 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:09.284 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:09.284 00:06:09.284 real 0m5.072s 00:06:09.284 user 0m10.309s 00:06:09.284 sys 0m0.433s 00:06:09.284 11:33:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.284 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.284 ************************************ 00:06:09.284 END TEST event_scheduler 00:06:09.284 ************************************ 00:06:09.284 11:33:39 -- event/event.sh@51 -- # modprobe -n nbd 00:06:09.284 11:33:39 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:09.284 11:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.284 11:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.284 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.284 ************************************ 00:06:09.284 START TEST app_repeat 00:06:09.284 ************************************ 00:06:09.284 11:33:39 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:09.284 11:33:39 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.284 11:33:39 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.284 11:33:39 -- event/event.sh@13 -- # local nbd_list 00:06:09.284 11:33:39 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.284 11:33:39 -- event/event.sh@14 -- # local bdev_list 00:06:09.284 11:33:39 -- event/event.sh@15 -- # local repeat_times=4 00:06:09.284 11:33:39 -- event/event.sh@17 -- # modprobe nbd 00:06:09.284 11:33:39 -- event/event.sh@19 -- # repeat_pid=3578978 00:06:09.284 11:33:39 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.284 11:33:39 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:09.284 11:33:39 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3578978' 00:06:09.284 Process app_repeat pid: 3578978 00:06:09.284 11:33:39 -- event/event.sh@23 -- # for i in {0..2} 00:06:09.284 11:33:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:09.284 spdk_app_start Round 0 00:06:09.284 11:33:39 -- event/event.sh@25 -- # waitforlisten 3578978 /var/tmp/spdk-nbd.sock 00:06:09.284 11:33:39 -- common/autotest_common.sh@829 -- # '[' -z 3578978 ']' 00:06:09.284 11:33:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.284 11:33:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.284 11:33:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:09.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.284 11:33:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.284 11:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.284 [2024-12-03 11:33:39.829245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.284 [2024-12-03 11:33:39.829317] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578978 ] 00:06:09.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.543 [2024-12-03 11:33:39.901279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.543 [2024-12-03 11:33:39.967413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.543 [2024-12-03 11:33:39.967415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.108 11:33:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.108 11:33:40 -- common/autotest_common.sh@862 -- # return 0 00:06:10.108 11:33:40 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.366 Malloc0 00:06:10.367 11:33:40 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.625 Malloc1 00:06:10.625 11:33:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.625 /dev/nbd0 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.625 11:33:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.625 11:33:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.625 11:33:41 -- common/autotest_common.sh@867 -- # local i 00:06:10.625 11:33:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.625 11:33:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.625 11:33:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.625 11:33:41 -- common/autotest_common.sh@871 -- 
# break 00:06:10.625 11:33:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.625 11:33:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.625 11:33:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.625 1+0 records in 00:06:10.625 1+0 records out 00:06:10.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259595 s, 15.8 MB/s 00:06:10.625 11:33:41 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:10.625 11:33:41 -- common/autotest_common.sh@884 -- # size=4096 00:06:10.625 11:33:41 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:10.883 11:33:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.883 11:33:41 -- common/autotest_common.sh@887 -- # return 0 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.883 /dev/nbd1 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.883 11:33:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:10.883 11:33:41 -- common/autotest_common.sh@867 -- # local i 00:06:10.883 11:33:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.883 11:33:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.883 11:33:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:10.883 11:33:41 -- common/autotest_common.sh@871 -- # break 00:06:10.883 11:33:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.883 11:33:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.883 11:33:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.883 1+0 records in 00:06:10.883 1+0 records out 00:06:10.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228818 s, 17.9 MB/s 00:06:10.883 11:33:41 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:10.883 11:33:41 -- common/autotest_common.sh@884 -- # size=4096 00:06:10.883 11:33:41 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:10.883 11:33:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.883 11:33:41 -- common/autotest_common.sh@887 -- # return 0 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.883 11:33:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.141 { 00:06:11.141 "nbd_device": "/dev/nbd0", 00:06:11.141 "bdev_name": "Malloc0" 00:06:11.141 }, 00:06:11.141 { 00:06:11.141 "nbd_device": "/dev/nbd1", 00:06:11.141 "bdev_name": "Malloc1" 00:06:11.141 } 00:06:11.141 ]' 
00:06:11.141 11:33:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.141 { 00:06:11.141 "nbd_device": "/dev/nbd0", 00:06:11.141 "bdev_name": "Malloc0" 00:06:11.141 }, 00:06:11.141 { 00:06:11.141 "nbd_device": "/dev/nbd1", 00:06:11.141 "bdev_name": "Malloc1" 00:06:11.141 } 00:06:11.141 ]' 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.141 /dev/nbd1' 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.141 /dev/nbd1' 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.141 256+0 records in 00:06:11.141 256+0 records out 00:06:11.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456516 s, 230 MB/s 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.141 256+0 records in 00:06:11.141 256+0 records out 00:06:11.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017769 s, 59.0 MB/s 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.141 256+0 records in 00:06:11.141 256+0 records out 00:06:11.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193027 s, 54.3 MB/s 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.141 11:33:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
00:06:11.141 11:33:41 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.399 11:33:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.399 11:33:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.399 11:33:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.399 11:33:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.399 11:33:41 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.399 11:33:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.399 11:33:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.399 11:33:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@41 -- # break 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.400 11:33:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@41 -- # break 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.658 11:33:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@65 -- # true 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.915 11:33:42 -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.915 11:33:42 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.174 11:33:42 -- event/event.sh@35 -- # sleep 3 00:06:12.432 [2024-12-03 11:33:42.796558] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:12.432 [2024-12-03 11:33:42.857692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.432 [2024-12-03 11:33:42.857694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.432 [2024-12-03 11:33:42.898581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.432 [2024-12-03 11:33:42.898628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.709 11:33:45 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.709 11:33:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:15.709 spdk_app_start Round 1 00:06:15.709 11:33:45 -- event/event.sh@25 -- # waitforlisten 3578978 /var/tmp/spdk-nbd.sock 00:06:15.709 11:33:45 -- common/autotest_common.sh@829 -- # '[' -z 3578978 ']' 00:06:15.709 11:33:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.709 11:33:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.709 11:33:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.709 11:33:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.709 11:33:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.709 11:33:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.709 11:33:45 -- common/autotest_common.sh@862 -- # return 0 00:06:15.709 11:33:45 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.709 Malloc0 00:06:15.709 11:33:45 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.709 Malloc1 00:06:15.709 11:33:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@12 -- # local i 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.709 11:33:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.709 /dev/nbd0 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.967 
11:33:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:15.967 11:33:46 -- common/autotest_common.sh@867 -- # local i 00:06:15.967 11:33:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:15.967 11:33:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:15.967 11:33:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:15.967 11:33:46 -- common/autotest_common.sh@871 -- # break 00:06:15.967 11:33:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:15.967 11:33:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:15.967 11:33:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.967 1+0 records in 00:06:15.967 1+0 records out 00:06:15.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247154 s, 16.6 MB/s 00:06:15.967 11:33:46 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:15.967 11:33:46 -- common/autotest_common.sh@884 -- # size=4096 00:06:15.967 11:33:46 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:15.967 11:33:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:15.967 11:33:46 -- common/autotest_common.sh@887 -- # return 0 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.967 /dev/nbd1 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.967 11:33:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:15.967 11:33:46 -- common/autotest_common.sh@867 -- # local i 00:06:15.967 11:33:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:15.967 11:33:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:15.967 11:33:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:15.967 11:33:46 -- common/autotest_common.sh@871 -- # break 00:06:15.967 11:33:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:15.967 11:33:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:15.967 11:33:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.967 1+0 records in 00:06:15.967 1+0 records out 00:06:15.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246745 s, 16.6 MB/s 00:06:15.967 11:33:46 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:15.967 11:33:46 -- common/autotest_common.sh@884 -- # size=4096 00:06:15.967 11:33:46 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:15.967 11:33:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:15.967 11:33:46 -- common/autotest_common.sh@887 -- # return 0 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.967 11:33:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.967 
11:33:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.225 { 00:06:16.225 "nbd_device": "/dev/nbd0", 00:06:16.225 "bdev_name": "Malloc0" 00:06:16.225 }, 00:06:16.225 { 00:06:16.225 "nbd_device": "/dev/nbd1", 00:06:16.225 "bdev_name": "Malloc1" 00:06:16.225 } 00:06:16.225 ]' 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.225 { 00:06:16.225 "nbd_device": "/dev/nbd0", 00:06:16.225 "bdev_name": "Malloc0" 00:06:16.225 }, 00:06:16.225 { 00:06:16.225 "nbd_device": "/dev/nbd1", 00:06:16.225 "bdev_name": "Malloc1" 00:06:16.225 } 00:06:16.225 ]' 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.225 /dev/nbd1' 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.225 /dev/nbd1' 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.225 256+0 records in 00:06:16.225 256+0 records out 00:06:16.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104942 s, 99.9 MB/s 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.225 256+0 records in 00:06:16.225 256+0 records out 00:06:16.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183024 s, 57.3 MB/s 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.225 11:33:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.483 256+0 records in 00:06:16.483 256+0 records out 00:06:16.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200954 s, 52.2 MB/s 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.483 
11:33:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@51 -- # local i 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.483 11:33:46 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@41 -- # break 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.483 11:33:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@41 -- # break 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.741 11:33:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@65 -- # true 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.998 
11:33:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.998 11:33:47 -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.998 11:33:47 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.256 11:33:47 -- event/event.sh@35 -- # sleep 3 00:06:17.514 [2024-12-03 11:33:47.894162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.514 [2024-12-03 11:33:47.956023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.514 [2024-12-03 11:33:47.956025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.514 [2024-12-03 11:33:47.997247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.514 [2024-12-03 11:33:47.997294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.793 11:33:50 -- event/event.sh@23 -- # for i in {0..2} 00:06:20.793 11:33:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.793 spdk_app_start Round 2 00:06:20.793 11:33:50 -- event/event.sh@25 -- # waitforlisten 3578978 /var/tmp/spdk-nbd.sock 00:06:20.793 11:33:50 -- common/autotest_common.sh@829 -- # '[' -z 3578978 ']' 00:06:20.793 11:33:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.793 11:33:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.793 11:33:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.793 11:33:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.793 11:33:50 -- common/autotest_common.sh@10 -- # set +x 00:06:20.793 11:33:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.793 11:33:50 -- common/autotest_common.sh@862 -- # return 0 00:06:20.793 11:33:50 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.793 Malloc0 00:06:20.793 11:33:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.793 Malloc1 00:06:20.794 11:33:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@12 -- # local i 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.794 11:33:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.052 /dev/nbd0 00:06:21.052 11:33:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.052 11:33:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.052 11:33:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:21.052 11:33:51 -- common/autotest_common.sh@867 -- # local i 00:06:21.052 11:33:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:21.052 11:33:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:21.052 11:33:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:21.052 11:33:51 -- common/autotest_common.sh@871 -- # break 00:06:21.052 11:33:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:21.052 11:33:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:21.052 11:33:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.052 1+0 records in 00:06:21.052 1+0 records out 00:06:21.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020927 s, 19.6 MB/s 00:06:21.052 11:33:51 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:21.052 11:33:51 -- common/autotest_common.sh@884 -- # size=4096 00:06:21.052 11:33:51 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:21.052 11:33:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:21.052 11:33:51 -- common/autotest_common.sh@887 -- # return 0 00:06:21.052 11:33:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.052 11:33:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.052 11:33:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.052 /dev/nbd1 00:06:21.052 11:33:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.052 11:33:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.052 11:33:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:21.052 11:33:51 -- common/autotest_common.sh@867 -- # local i 00:06:21.052 11:33:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:21.052 11:33:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:21.052 11:33:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:21.310 11:33:51 -- common/autotest_common.sh@871 -- # break 00:06:21.310 11:33:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:21.310 11:33:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:21.310 11:33:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.310 1+0 records in 00:06:21.310 1+0 records out 00:06:21.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267732 s, 15.3 MB/s 00:06:21.310 11:33:51 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:21.310 11:33:51 -- common/autotest_common.sh@884 -- # size=4096 00:06:21.310 11:33:51 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:21.310 11:33:51 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:21.310 11:33:51 -- common/autotest_common.sh@887 -- # return 0 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.310 { 00:06:21.310 "nbd_device": "/dev/nbd0", 00:06:21.310 "bdev_name": "Malloc0" 00:06:21.310 }, 00:06:21.310 { 00:06:21.310 "nbd_device": "/dev/nbd1", 00:06:21.310 "bdev_name": "Malloc1" 00:06:21.310 } 00:06:21.310 ]' 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.310 { 00:06:21.310 "nbd_device": "/dev/nbd0", 00:06:21.310 "bdev_name": "Malloc0" 00:06:21.310 }, 00:06:21.310 { 00:06:21.310 "nbd_device": "/dev/nbd1", 00:06:21.310 "bdev_name": "Malloc1" 00:06:21.310 } 00:06:21.310 ]' 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.310 /dev/nbd1' 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.310 /dev/nbd1' 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.310 256+0 records in 00:06:21.310 256+0 records out 00:06:21.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116142 s, 90.3 MB/s 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.310 11:33:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.568 256+0 records in 00:06:21.568 256+0 records out 00:06:21.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192407 s, 54.5 MB/s 00:06:21.568 11:33:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.568 11:33:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.569 256+0 records in 00:06:21.569 256+0 records out 00:06:21.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204497 s, 51.3 MB/s 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@51 -- # local i 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.569 11:33:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.569 11:33:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@41 -- # break 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@41 -- # break 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.827 11:33:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.084 11:33:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@65 -- # true 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.085 11:33:52 -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.085 11:33:52 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.343 11:33:52 -- event/event.sh@35 -- # sleep 3 00:06:22.602 [2024-12-03 11:33:53.018930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.602 [2024-12-03 11:33:53.080511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.602 [2024-12-03 11:33:53.080513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.602 [2024-12-03 11:33:53.121943] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.602 [2024-12-03 11:33:53.121990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.987 11:33:55 -- event/event.sh@38 -- # waitforlisten 3578978 /var/tmp/spdk-nbd.sock 00:06:25.987 11:33:55 -- common/autotest_common.sh@829 -- # '[' -z 3578978 ']' 00:06:25.987 11:33:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.987 11:33:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.987 11:33:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.987 11:33:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.987 11:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.987 11:33:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.987 11:33:55 -- common/autotest_common.sh@862 -- # return 0 00:06:25.987 11:33:55 -- event/event.sh@39 -- # killprocess 3578978 00:06:25.987 11:33:55 -- common/autotest_common.sh@936 -- # '[' -z 3578978 ']' 00:06:25.987 11:33:55 -- common/autotest_common.sh@940 -- # kill -0 3578978 00:06:25.987 11:33:55 -- common/autotest_common.sh@941 -- # uname 00:06:25.987 11:33:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.987 11:33:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3578978 00:06:25.987 11:33:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.987 11:33:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.987 11:33:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3578978' 00:06:25.987 killing process with pid 3578978 00:06:25.987 11:33:56 -- common/autotest_common.sh@955 -- # kill 3578978 00:06:25.987 11:33:56 -- common/autotest_common.sh@960 -- # wait 3578978 00:06:25.987 spdk_app_start is called in Round 0. 00:06:25.987 Shutdown signal received, stop current app iteration 00:06:25.987 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:25.987 spdk_app_start is called in Round 1. 
00:06:25.987 Shutdown signal received, stop current app iteration 00:06:25.987 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:25.987 spdk_app_start is called in Round 2. 00:06:25.987 Shutdown signal received, stop current app iteration 00:06:25.987 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:25.987 spdk_app_start is called in Round 3. 00:06:25.987 Shutdown signal received, stop current app iteration 00:06:25.987 11:33:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:25.987 11:33:56 -- event/event.sh@42 -- # return 0 00:06:25.987 00:06:25.987 real 0m16.448s 00:06:25.987 user 0m35.096s 00:06:25.987 sys 0m2.961s 00:06:25.987 11:33:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.987 11:33:56 -- common/autotest_common.sh@10 -- # set +x 00:06:25.987 ************************************ 00:06:25.987 END TEST app_repeat 00:06:25.987 ************************************ 00:06:25.987 11:33:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:25.987 11:33:56 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:25.987 11:33:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.987 11:33:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.987 11:33:56 -- common/autotest_common.sh@10 -- # set +x 00:06:25.987 ************************************ 00:06:25.987 START TEST cpu_locks 00:06:25.987 ************************************ 00:06:25.987 11:33:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:25.987 * Looking for test storage... 00:06:25.987 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:25.987 11:33:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:25.987 11:33:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:25.987 11:33:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:25.987 11:33:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:25.987 11:33:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:25.987 11:33:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:25.987 11:33:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:25.987 11:33:56 -- scripts/common.sh@335 -- # IFS=.-: 00:06:25.987 11:33:56 -- scripts/common.sh@335 -- # read -ra ver1 00:06:25.987 11:33:56 -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.987 11:33:56 -- scripts/common.sh@336 -- # read -ra ver2 00:06:25.987 11:33:56 -- scripts/common.sh@337 -- # local 'op=<' 00:06:25.987 11:33:56 -- scripts/common.sh@339 -- # ver1_l=2 00:06:25.987 11:33:56 -- scripts/common.sh@340 -- # ver2_l=1 00:06:25.987 11:33:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:25.987 11:33:56 -- scripts/common.sh@343 -- # case "$op" in 00:06:25.987 11:33:56 -- scripts/common.sh@344 -- # : 1 00:06:25.987 11:33:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:25.987 11:33:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.987 11:33:56 -- scripts/common.sh@364 -- # decimal 1 00:06:25.987 11:33:56 -- scripts/common.sh@352 -- # local d=1 00:06:25.987 11:33:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.987 11:33:56 -- scripts/common.sh@354 -- # echo 1 00:06:25.987 11:33:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:25.987 11:33:56 -- scripts/common.sh@365 -- # decimal 2 00:06:25.987 11:33:56 -- scripts/common.sh@352 -- # local d=2 00:06:25.987 11:33:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.987 11:33:56 -- scripts/common.sh@354 -- # echo 2 00:06:25.987 11:33:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:25.987 11:33:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:25.987 11:33:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:25.987 11:33:56 -- scripts/common.sh@367 -- # return 0 00:06:25.987 11:33:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.987 11:33:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:25.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.988 --rc genhtml_branch_coverage=1 00:06:25.988 --rc genhtml_function_coverage=1 00:06:25.988 --rc genhtml_legend=1 00:06:25.988 --rc geninfo_all_blocks=1 00:06:25.988 --rc geninfo_unexecuted_blocks=1 00:06:25.988 00:06:25.988 ' 00:06:25.988 11:33:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:25.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.988 --rc genhtml_branch_coverage=1 00:06:25.988 --rc genhtml_function_coverage=1 00:06:25.988 --rc genhtml_legend=1 00:06:25.988 --rc geninfo_all_blocks=1 00:06:25.988 --rc geninfo_unexecuted_blocks=1 00:06:25.988 00:06:25.988 ' 00:06:25.988 11:33:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:25.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.988 --rc genhtml_branch_coverage=1 00:06:25.988 --rc genhtml_function_coverage=1 00:06:25.988 --rc genhtml_legend=1 00:06:25.988 --rc geninfo_all_blocks=1 00:06:25.988 --rc geninfo_unexecuted_blocks=1 00:06:25.988 00:06:25.988 ' 00:06:25.988 11:33:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:25.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.988 --rc genhtml_branch_coverage=1 00:06:25.988 --rc genhtml_function_coverage=1 00:06:25.988 --rc genhtml_legend=1 00:06:25.988 --rc geninfo_all_blocks=1 00:06:25.988 --rc geninfo_unexecuted_blocks=1 00:06:25.988 00:06:25.988 ' 00:06:25.988 11:33:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.988 11:33:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.988 11:33:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.988 11:33:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.988 11:33:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.988 11:33:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.988 11:33:56 -- common/autotest_common.sh@10 -- # set +x 00:06:25.988 ************************************ 00:06:25.988 START TEST default_locks 00:06:25.988 ************************************ 00:06:25.988 11:33:56 -- common/autotest_common.sh@1114 -- # default_locks 00:06:25.988 11:33:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3582179 00:06:25.988 11:33:56 -- event/cpu_locks.sh@47 -- # waitforlisten 3582179 00:06:25.988 11:33:56 -- common/autotest_common.sh@829 -- # '[' -z 3582179 ']' 00:06:25.988 
11:33:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.988 11:33:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.988 11:33:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.988 11:33:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.988 11:33:56 -- common/autotest_common.sh@10 -- # set +x 00:06:25.988 11:33:56 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.988 [2024-12-03 11:33:56.525141] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.988 [2024-12-03 11:33:56.525197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582179 ] 00:06:25.988 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.988 [2024-12-03 11:33:56.592942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.246 [2024-12-03 11:33:56.664847] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.246 [2024-12-03 11:33:56.664961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.813 11:33:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.813 11:33:57 -- common/autotest_common.sh@862 -- # return 0 00:06:26.813 11:33:57 -- event/cpu_locks.sh@49 -- # locks_exist 3582179 00:06:26.813 11:33:57 -- event/cpu_locks.sh@22 -- # lslocks -p 3582179 00:06:26.813 11:33:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.071 lslocks: write error 00:06:27.071 11:33:57 -- event/cpu_locks.sh@50 -- # killprocess 3582179 00:06:27.071 11:33:57 -- common/autotest_common.sh@936 -- # '[' -z 3582179 ']' 00:06:27.071 11:33:57 -- common/autotest_common.sh@940 -- # kill -0 3582179 00:06:27.071 11:33:57 -- common/autotest_common.sh@941 -- # uname 00:06:27.071 11:33:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.071 11:33:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3582179 00:06:27.071 11:33:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.071 11:33:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.071 11:33:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3582179' 00:06:27.071 killing process with pid 3582179 00:06:27.071 11:33:57 -- common/autotest_common.sh@955 -- # kill 3582179 00:06:27.071 11:33:57 -- common/autotest_common.sh@960 -- # wait 3582179 00:06:27.668 11:33:58 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3582179 00:06:27.668 11:33:58 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.668 11:33:58 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3582179 00:06:27.668 11:33:58 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:27.669 11:33:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.669 11:33:58 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:27.669 11:33:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.669 11:33:58 -- common/autotest_common.sh@653 -- # waitforlisten 3582179 00:06:27.669 11:33:58 -- common/autotest_common.sh@829 -- # '[' -z 
3582179 ']' 00:06:27.669 11:33:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.669 11:33:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.669 11:33:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.669 11:33:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.669 11:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:27.669 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3582179) - No such process 00:06:27.669 ERROR: process (pid: 3582179) is no longer running 00:06:27.669 11:33:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.669 11:33:58 -- common/autotest_common.sh@862 -- # return 1 00:06:27.669 11:33:58 -- common/autotest_common.sh@653 -- # es=1 00:06:27.669 11:33:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.669 11:33:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.669 11:33:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.669 11:33:58 -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.669 11:33:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.669 11:33:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.669 11:33:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.669 00:06:27.669 real 0m1.539s 00:06:27.669 user 0m1.603s 00:06:27.669 sys 0m0.519s 00:06:27.669 11:33:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.669 11:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:27.669 ************************************ 00:06:27.669 END TEST default_locks 00:06:27.669 ************************************ 00:06:27.669 11:33:58 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.669 11:33:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.669 11:33:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.669 11:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:27.669 ************************************ 00:06:27.669 START TEST default_locks_via_rpc 00:06:27.669 ************************************ 00:06:27.669 11:33:58 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:27.669 11:33:58 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3582484 00:06:27.669 11:33:58 -- event/cpu_locks.sh@63 -- # waitforlisten 3582484 00:06:27.669 11:33:58 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.669 11:33:58 -- common/autotest_common.sh@829 -- # '[' -z 3582484 ']' 00:06:27.669 11:33:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.669 11:33:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.669 11:33:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.669 11:33:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.669 11:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:27.669 [2024-12-03 11:33:58.112806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:27.669 [2024-12-03 11:33:58.112862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582484 ] 00:06:27.669 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.669 [2024-12-03 11:33:58.181193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.669 [2024-12-03 11:33:58.254010] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.669 [2024-12-03 11:33:58.254135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.600 11:33:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.600 11:33:58 -- common/autotest_common.sh@862 -- # return 0 00:06:28.600 11:33:58 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:28.600 11:33:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.600 11:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.600 11:33:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.600 11:33:58 -- event/cpu_locks.sh@67 -- # no_locks 00:06:28.600 11:33:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.600 11:33:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.600 11:33:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.600 11:33:58 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.600 11:33:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.600 11:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.600 11:33:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.600 11:33:58 -- event/cpu_locks.sh@71 -- # locks_exist 3582484 00:06:28.600 11:33:58 -- event/cpu_locks.sh@22 -- # lslocks -p 3582484 00:06:28.600 11:33:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.857 11:33:59 -- event/cpu_locks.sh@73 -- # killprocess 3582484 00:06:28.857 11:33:59 -- common/autotest_common.sh@936 -- # '[' -z 3582484 ']' 00:06:28.857 11:33:59 -- common/autotest_common.sh@940 -- # kill -0 3582484 00:06:28.857 11:33:59 -- common/autotest_common.sh@941 -- # uname 00:06:28.857 11:33:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.857 11:33:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3582484 00:06:29.115 11:33:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.115 11:33:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.115 11:33:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3582484' 00:06:29.115 killing process with pid 3582484 00:06:29.115 11:33:59 -- common/autotest_common.sh@955 -- # kill 3582484 00:06:29.115 11:33:59 -- common/autotest_common.sh@960 -- # wait 3582484 00:06:29.372 00:06:29.372 real 0m1.760s 00:06:29.372 user 0m1.846s 00:06:29.372 sys 0m0.603s 00:06:29.372 11:33:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.372 11:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.372 ************************************ 00:06:29.372 END TEST default_locks_via_rpc 00:06:29.372 ************************************ 00:06:29.372 11:33:59 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:29.372 11:33:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.372 11:33:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.372 11:33:59 -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.372 ************************************ 00:06:29.372 START TEST non_locking_app_on_locked_coremask 00:06:29.372 ************************************ 00:06:29.372 11:33:59 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:29.372 11:33:59 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3582791 00:06:29.372 11:33:59 -- event/cpu_locks.sh@81 -- # waitforlisten 3582791 /var/tmp/spdk.sock 00:06:29.372 11:33:59 -- common/autotest_common.sh@829 -- # '[' -z 3582791 ']' 00:06:29.372 11:33:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.372 11:33:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.372 11:33:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.372 11:33:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.372 11:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.372 11:33:59 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.372 [2024-12-03 11:33:59.914708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.372 [2024-12-03 11:33:59.914766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582791 ] 00:06:29.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.372 [2024-12-03 11:33:59.983818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.629 [2024-12-03 11:34:00.070097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.629 [2024-12-03 11:34:00.070219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.190 11:34:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.190 11:34:00 -- common/autotest_common.sh@862 -- # return 0 00:06:30.190 11:34:00 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3583053 00:06:30.190 11:34:00 -- event/cpu_locks.sh@85 -- # waitforlisten 3583053 /var/tmp/spdk2.sock 00:06:30.190 11:34:00 -- common/autotest_common.sh@829 -- # '[' -z 3583053 ']' 00:06:30.190 11:34:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.190 11:34:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.190 11:34:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.190 11:34:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.190 11:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.190 11:34:00 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:30.190 [2024-12-03 11:34:00.774032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.190 [2024-12-03 11:34:00.774086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583053 ] 00:06:30.446 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.446 [2024-12-03 11:34:00.869019] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.446 [2024-12-03 11:34:00.869043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.446 [2024-12-03 11:34:01.009311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.446 [2024-12-03 11:34:01.009432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.010 11:34:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.010 11:34:01 -- common/autotest_common.sh@862 -- # return 0 00:06:31.010 11:34:01 -- event/cpu_locks.sh@87 -- # locks_exist 3582791 00:06:31.010 11:34:01 -- event/cpu_locks.sh@22 -- # lslocks -p 3582791 00:06:31.010 11:34:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.385 lslocks: write error 00:06:32.385 11:34:02 -- event/cpu_locks.sh@89 -- # killprocess 3582791 00:06:32.385 11:34:02 -- common/autotest_common.sh@936 -- # '[' -z 3582791 ']' 00:06:32.385 11:34:02 -- common/autotest_common.sh@940 -- # kill -0 3582791 00:06:32.385 11:34:02 -- common/autotest_common.sh@941 -- # uname 00:06:32.385 11:34:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.385 11:34:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3582791 00:06:32.385 11:34:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.385 11:34:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.385 11:34:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3582791' 00:06:32.385 killing process with pid 3582791 00:06:32.385 11:34:02 -- common/autotest_common.sh@955 -- # kill 3582791 00:06:32.385 11:34:02 -- common/autotest_common.sh@960 -- # wait 3582791 00:06:33.318 11:34:03 -- event/cpu_locks.sh@90 -- # killprocess 3583053 00:06:33.318 11:34:03 -- common/autotest_common.sh@936 -- # '[' -z 3583053 ']' 00:06:33.318 11:34:03 -- common/autotest_common.sh@940 -- # kill -0 3583053 00:06:33.318 11:34:03 -- common/autotest_common.sh@941 -- # uname 00:06:33.318 11:34:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.318 11:34:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3583053 00:06:33.318 11:34:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.318 11:34:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.318 11:34:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3583053' 00:06:33.318 killing process with pid 3583053 00:06:33.318 11:34:03 -- common/autotest_common.sh@955 -- # kill 3583053 00:06:33.318 11:34:03 -- common/autotest_common.sh@960 -- # wait 3583053 00:06:33.576 00:06:33.576 real 0m4.172s 00:06:33.576 user 0m4.511s 00:06:33.576 sys 0m1.351s 00:06:33.576 11:34:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.576 11:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.576 ************************************ 00:06:33.576 END TEST non_locking_app_on_locked_coremask 00:06:33.576 ************************************ 00:06:33.576 11:34:04 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:33.576 11:34:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.576 11:34:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.576 11:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.576 ************************************ 00:06:33.576 START TEST locking_app_on_unlocked_coremask 00:06:33.576 ************************************ 00:06:33.576 11:34:04 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:33.576 11:34:04 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3583626 00:06:33.576 11:34:04 -- event/cpu_locks.sh@99 -- # waitforlisten 3583626 /var/tmp/spdk.sock 00:06:33.576 11:34:04 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.576 11:34:04 -- common/autotest_common.sh@829 -- # '[' -z 3583626 ']' 00:06:33.576 11:34:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.576 11:34:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.576 11:34:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.576 11:34:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.576 11:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.576 [2024-12-03 11:34:04.143416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.576 [2024-12-03 11:34:04.143468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583626 ] 00:06:33.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.833 [2024-12-03 11:34:04.211497] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:33.833 [2024-12-03 11:34:04.211530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.833 [2024-12-03 11:34:04.273751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.833 [2024-12-03 11:34:04.273897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.398 11:34:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.398 11:34:04 -- common/autotest_common.sh@862 -- # return 0 00:06:34.398 11:34:04 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3583840 00:06:34.398 11:34:04 -- event/cpu_locks.sh@103 -- # waitforlisten 3583840 /var/tmp/spdk2.sock 00:06:34.398 11:34:04 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.398 11:34:04 -- common/autotest_common.sh@829 -- # '[' -z 3583840 ']' 00:06:34.398 11:34:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.398 11:34:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.398 11:34:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:34.398 11:34:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.398 11:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.398 [2024-12-03 11:34:05.002435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.398 [2024-12-03 11:34:05.002488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583840 ] 00:06:34.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.654 [2024-12-03 11:34:05.097001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.654 [2024-12-03 11:34:05.232828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.654 [2024-12-03 11:34:05.232957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.217 11:34:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.217 11:34:05 -- common/autotest_common.sh@862 -- # return 0 00:06:35.217 11:34:05 -- event/cpu_locks.sh@105 -- # locks_exist 3583840 00:06:35.217 11:34:05 -- event/cpu_locks.sh@22 -- # lslocks -p 3583840 00:06:35.217 11:34:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.588 lslocks: write error 00:06:36.588 11:34:06 -- event/cpu_locks.sh@107 -- # killprocess 3583626 00:06:36.588 11:34:06 -- common/autotest_common.sh@936 -- # '[' -z 3583626 ']' 00:06:36.588 11:34:06 -- common/autotest_common.sh@940 -- # kill -0 3583626 00:06:36.588 11:34:06 -- common/autotest_common.sh@941 -- # uname 00:06:36.588 11:34:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.588 11:34:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3583626 00:06:36.588 11:34:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.588 11:34:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.588 11:34:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3583626' 00:06:36.588 killing process with pid 3583626 00:06:36.588 11:34:06 -- common/autotest_common.sh@955 -- # kill 3583626 00:06:36.588 11:34:06 -- common/autotest_common.sh@960 -- # wait 3583626 00:06:37.155 11:34:07 -- event/cpu_locks.sh@108 -- # killprocess 3583840 00:06:37.155 11:34:07 -- common/autotest_common.sh@936 -- # '[' -z 3583840 ']' 00:06:37.155 11:34:07 -- common/autotest_common.sh@940 -- # kill -0 3583840 00:06:37.155 11:34:07 -- common/autotest_common.sh@941 -- # uname 00:06:37.155 11:34:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.155 11:34:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3583840 00:06:37.155 11:34:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.155 11:34:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.155 11:34:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3583840' 00:06:37.155 killing process with pid 3583840 00:06:37.155 11:34:07 -- common/autotest_common.sh@955 -- # kill 3583840 00:06:37.155 11:34:07 -- common/autotest_common.sh@960 -- # wait 3583840 00:06:37.413 00:06:37.413 real 0m3.906s 00:06:37.413 user 0m4.228s 00:06:37.413 sys 0m1.239s 00:06:37.413 11:34:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.413 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:37.413 ************************************ 00:06:37.413 END TEST locking_app_on_unlocked_coremask 
00:06:37.413 ************************************ 00:06:37.672 11:34:08 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:37.672 11:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:37.672 11:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.672 11:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.672 ************************************ 00:06:37.672 START TEST locking_app_on_locked_coremask 00:06:37.672 ************************************ 00:06:37.672 11:34:08 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:37.672 11:34:08 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3584407 00:06:37.672 11:34:08 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.672 11:34:08 -- event/cpu_locks.sh@116 -- # waitforlisten 3584407 /var/tmp/spdk.sock 00:06:37.672 11:34:08 -- common/autotest_common.sh@829 -- # '[' -z 3584407 ']' 00:06:37.672 11:34:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.672 11:34:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.672 11:34:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.672 11:34:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.672 11:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.672 [2024-12-03 11:34:08.087781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.672 [2024-12-03 11:34:08.087832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584407 ] 00:06:37.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.672 [2024-12-03 11:34:08.155776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.672 [2024-12-03 11:34:08.223741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.672 [2024-12-03 11:34:08.223883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.630 11:34:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.630 11:34:08 -- common/autotest_common.sh@862 -- # return 0 00:06:38.630 11:34:08 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3584485 00:06:38.630 11:34:08 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.630 11:34:08 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3584485 /var/tmp/spdk2.sock 00:06:38.630 11:34:08 -- common/autotest_common.sh@650 -- # local es=0 00:06:38.630 11:34:08 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3584485 /var/tmp/spdk2.sock 00:06:38.630 11:34:08 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:38.630 11:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.630 11:34:08 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:38.630 11:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.630 11:34:08 -- common/autotest_common.sh@653 -- # waitforlisten 3584485 /var/tmp/spdk2.sock 00:06:38.630 11:34:08 -- common/autotest_common.sh@829 -- # '[' 
-z 3584485 ']' 00:06:38.630 11:34:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.630 11:34:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.630 11:34:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.630 11:34:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.630 11:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:38.630 [2024-12-03 11:34:08.948203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.630 [2024-12-03 11:34:08.948254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584485 ] 00:06:38.630 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.630 [2024-12-03 11:34:09.038068] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3584407 has claimed it. 00:06:38.630 [2024-12-03 11:34:09.038117] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3584485) - No such process 00:06:39.195 ERROR: process (pid: 3584485) is no longer running 00:06:39.195 11:34:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.195 11:34:09 -- common/autotest_common.sh@862 -- # return 1 00:06:39.195 11:34:09 -- common/autotest_common.sh@653 -- # es=1 00:06:39.195 11:34:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.195 11:34:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.195 11:34:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.195 11:34:09 -- event/cpu_locks.sh@122 -- # locks_exist 3584407 00:06:39.195 11:34:09 -- event/cpu_locks.sh@22 -- # lslocks -p 3584407 00:06:39.195 11:34:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.763 lslocks: write error 00:06:39.763 11:34:10 -- event/cpu_locks.sh@124 -- # killprocess 3584407 00:06:39.763 11:34:10 -- common/autotest_common.sh@936 -- # '[' -z 3584407 ']' 00:06:39.763 11:34:10 -- common/autotest_common.sh@940 -- # kill -0 3584407 00:06:39.763 11:34:10 -- common/autotest_common.sh@941 -- # uname 00:06:39.763 11:34:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.763 11:34:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3584407 00:06:39.763 11:34:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.763 11:34:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.763 11:34:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3584407' 00:06:39.763 killing process with pid 3584407 00:06:39.763 11:34:10 -- common/autotest_common.sh@955 -- # kill 3584407 00:06:39.763 11:34:10 -- common/autotest_common.sh@960 -- # wait 3584407 00:06:40.021 00:06:40.021 real 0m2.541s 00:06:40.021 user 0m2.793s 00:06:40.021 sys 0m0.792s 00:06:40.021 11:34:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.021 11:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.021 ************************************ 00:06:40.021 END TEST locking_app_on_locked_coremask 00:06:40.021 ************************************ 00:06:40.021 11:34:10 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:40.021 11:34:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.021 11:34:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.021 11:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.021 ************************************ 00:06:40.021 START TEST locking_overlapped_coremask 00:06:40.021 ************************************ 00:06:40.021 11:34:10 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:40.021 11:34:10 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3584786 00:06:40.021 11:34:10 -- event/cpu_locks.sh@133 -- # waitforlisten 3584786 /var/tmp/spdk.sock 00:06:40.021 11:34:10 -- common/autotest_common.sh@829 -- # '[' -z 3584786 ']' 00:06:40.021 11:34:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.021 11:34:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.021 11:34:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.022 11:34:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.022 11:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.022 11:34:10 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:40.280 [2024-12-03 11:34:10.673519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.280 [2024-12-03 11:34:10.673573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584786 ] 00:06:40.280 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.280 [2024-12-03 11:34:10.742716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.280 [2024-12-03 11:34:10.816056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.280 [2024-12-03 11:34:10.816205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.280 [2024-12-03 11:34:10.816223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.280 [2024-12-03 11:34:10.816226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.213 11:34:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.213 11:34:11 -- common/autotest_common.sh@862 -- # return 0 00:06:41.213 11:34:11 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3585054 00:06:41.213 11:34:11 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3585054 /var/tmp/spdk2.sock 00:06:41.213 11:34:11 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:41.213 11:34:11 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.213 11:34:11 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3585054 /var/tmp/spdk2.sock 00:06:41.213 11:34:11 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:41.213 11:34:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.213 11:34:11 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:41.213 11:34:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.213 11:34:11 -- 
common/autotest_common.sh@653 -- # waitforlisten 3585054 /var/tmp/spdk2.sock 00:06:41.213 11:34:11 -- common/autotest_common.sh@829 -- # '[' -z 3585054 ']' 00:06:41.213 11:34:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.213 11:34:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.213 11:34:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.213 11:34:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.213 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.213 [2024-12-03 11:34:11.542510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.213 [2024-12-03 11:34:11.542557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585054 ] 00:06:41.213 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.213 [2024-12-03 11:34:11.639239] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3584786 has claimed it. 00:06:41.213 [2024-12-03 11:34:11.639275] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.779 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3585054) - No such process 00:06:41.779 ERROR: process (pid: 3585054) is no longer running 00:06:41.779 11:34:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.779 11:34:12 -- common/autotest_common.sh@862 -- # return 1 00:06:41.779 11:34:12 -- common/autotest_common.sh@653 -- # es=1 00:06:41.779 11:34:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.779 11:34:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.779 11:34:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.779 11:34:12 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.779 11:34:12 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.779 11:34:12 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.779 11:34:12 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.779 11:34:12 -- event/cpu_locks.sh@141 -- # killprocess 3584786 00:06:41.779 11:34:12 -- common/autotest_common.sh@936 -- # '[' -z 3584786 ']' 00:06:41.779 11:34:12 -- common/autotest_common.sh@940 -- # kill -0 3584786 00:06:41.779 11:34:12 -- common/autotest_common.sh@941 -- # uname 00:06:41.779 11:34:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.779 11:34:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3584786 00:06:41.779 11:34:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.779 11:34:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.779 11:34:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3584786' 00:06:41.779 killing process with pid 3584786 00:06:41.779 11:34:12 -- common/autotest_common.sh@955 -- # kill 3584786 00:06:41.779 11:34:12 -- 
common/autotest_common.sh@960 -- # wait 3584786 00:06:42.037 00:06:42.037 real 0m1.945s 00:06:42.037 user 0m5.429s 00:06:42.037 sys 0m0.471s 00:06:42.037 11:34:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.037 11:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:42.037 ************************************ 00:06:42.037 END TEST locking_overlapped_coremask 00:06:42.037 ************************************ 00:06:42.037 11:34:12 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.037 11:34:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.037 11:34:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.037 11:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:42.037 ************************************ 00:06:42.037 START TEST locking_overlapped_coremask_via_rpc 00:06:42.037 ************************************ 00:06:42.037 11:34:12 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:42.037 11:34:12 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3585249 00:06:42.037 11:34:12 -- event/cpu_locks.sh@149 -- # waitforlisten 3585249 /var/tmp/spdk.sock 00:06:42.037 11:34:12 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.038 11:34:12 -- common/autotest_common.sh@829 -- # '[' -z 3585249 ']' 00:06:42.038 11:34:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.038 11:34:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.038 11:34:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.038 11:34:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.038 11:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:42.295 [2024-12-03 11:34:12.667125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.295 [2024-12-03 11:34:12.667182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585249 ] 00:06:42.295 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.295 [2024-12-03 11:34:12.734467] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.295 [2024-12-03 11:34:12.734494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.295 [2024-12-03 11:34:12.808607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.295 [2024-12-03 11:34:12.808747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.295 [2024-12-03 11:34:12.808839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.295 [2024-12-03 11:34:12.808841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.229 11:34:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.229 11:34:13 -- common/autotest_common.sh@862 -- # return 0 00:06:43.229 11:34:13 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3585365 00:06:43.229 11:34:13 -- event/cpu_locks.sh@153 -- # waitforlisten 3585365 /var/tmp/spdk2.sock 00:06:43.230 11:34:13 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.230 11:34:13 -- common/autotest_common.sh@829 -- # '[' -z 3585365 ']' 00:06:43.230 11:34:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.230 11:34:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.230 11:34:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.230 11:34:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.230 11:34:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.230 [2024-12-03 11:34:13.532549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.230 [2024-12-03 11:34:13.532599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585365 ] 00:06:43.230 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.230 [2024-12-03 11:34:13.627073] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.230 [2024-12-03 11:34:13.627100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.230 [2024-12-03 11:34:13.768168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.230 [2024-12-03 11:34:13.768350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.230 [2024-12-03 11:34:13.768485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.230 [2024-12-03 11:34:13.768488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.795 11:34:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.795 11:34:14 -- common/autotest_common.sh@862 -- # return 0 00:06:43.795 11:34:14 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.795 11:34:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.795 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.795 11:34:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.795 11:34:14 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.795 11:34:14 -- common/autotest_common.sh@650 -- # local es=0 00:06:43.795 11:34:14 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.795 11:34:14 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:43.795 11:34:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.795 11:34:14 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:43.795 11:34:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.795 11:34:14 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.795 11:34:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.795 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.795 [2024-12-03 11:34:14.353179] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3585249 has claimed it. 00:06:43.795 request: 00:06:43.795 { 00:06:43.795 "method": "framework_enable_cpumask_locks", 00:06:43.795 "req_id": 1 00:06:43.795 } 00:06:43.795 Got JSON-RPC error response 00:06:43.795 response: 00:06:43.795 { 00:06:43.795 "code": -32603, 00:06:43.795 "message": "Failed to claim CPU core: 2" 00:06:43.795 } 00:06:43.795 11:34:14 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:43.795 11:34:14 -- common/autotest_common.sh@653 -- # es=1 00:06:43.795 11:34:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.795 11:34:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.795 11:34:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.795 11:34:14 -- event/cpu_locks.sh@158 -- # waitforlisten 3585249 /var/tmp/spdk.sock 00:06:43.795 11:34:14 -- common/autotest_common.sh@829 -- # '[' -z 3585249 ']' 00:06:43.795 11:34:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.795 11:34:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.795 11:34:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.795 11:34:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.795 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:44.053 11:34:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.053 11:34:14 -- common/autotest_common.sh@862 -- # return 0 00:06:44.053 11:34:14 -- event/cpu_locks.sh@159 -- # waitforlisten 3585365 /var/tmp/spdk2.sock 00:06:44.053 11:34:14 -- common/autotest_common.sh@829 -- # '[' -z 3585365 ']' 00:06:44.053 11:34:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.053 11:34:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.053 11:34:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.053 11:34:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.053 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:44.311 11:34:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.311 11:34:14 -- common/autotest_common.sh@862 -- # return 0 00:06:44.311 11:34:14 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:44.311 11:34:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.311 11:34:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.311 11:34:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.311 00:06:44.311 real 0m2.130s 00:06:44.311 user 0m0.874s 00:06:44.311 sys 0m0.191s 00:06:44.311 11:34:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.311 11:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:44.311 ************************************ 00:06:44.311 END TEST locking_overlapped_coremask_via_rpc 00:06:44.311 ************************************ 00:06:44.311 11:34:14 -- event/cpu_locks.sh@174 -- # cleanup 00:06:44.311 11:34:14 -- event/cpu_locks.sh@15 -- # [[ -z 3585249 ]] 00:06:44.311 11:34:14 -- event/cpu_locks.sh@15 -- # killprocess 3585249 00:06:44.311 11:34:14 -- common/autotest_common.sh@936 -- # '[' -z 3585249 ']' 00:06:44.311 11:34:14 -- common/autotest_common.sh@940 -- # kill -0 3585249 00:06:44.311 11:34:14 -- common/autotest_common.sh@941 -- # uname 00:06:44.311 11:34:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.311 11:34:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3585249 00:06:44.311 11:34:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.311 11:34:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.311 11:34:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3585249' 00:06:44.311 killing process with pid 3585249 00:06:44.311 11:34:14 -- common/autotest_common.sh@955 -- # kill 3585249 00:06:44.311 11:34:14 -- common/autotest_common.sh@960 -- # wait 3585249 00:06:44.877 11:34:15 -- event/cpu_locks.sh@16 -- # [[ -z 3585365 ]] 00:06:44.877 11:34:15 -- event/cpu_locks.sh@16 -- # killprocess 3585365 00:06:44.877 11:34:15 -- common/autotest_common.sh@936 -- # '[' -z 3585365 ']' 00:06:44.877 11:34:15 -- common/autotest_common.sh@940 -- # kill -0 3585365 00:06:44.877 11:34:15 -- common/autotest_common.sh@941 -- # uname 
00:06:44.877 11:34:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.877 11:34:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3585365 00:06:44.877 11:34:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:44.877 11:34:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:44.877 11:34:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3585365' 00:06:44.877 killing process with pid 3585365 00:06:44.877 11:34:15 -- common/autotest_common.sh@955 -- # kill 3585365 00:06:44.877 11:34:15 -- common/autotest_common.sh@960 -- # wait 3585365 00:06:45.135 11:34:15 -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.135 11:34:15 -- event/cpu_locks.sh@1 -- # cleanup 00:06:45.135 11:34:15 -- event/cpu_locks.sh@15 -- # [[ -z 3585249 ]] 00:06:45.135 11:34:15 -- event/cpu_locks.sh@15 -- # killprocess 3585249 00:06:45.135 11:34:15 -- common/autotest_common.sh@936 -- # '[' -z 3585249 ']' 00:06:45.135 11:34:15 -- common/autotest_common.sh@940 -- # kill -0 3585249 00:06:45.135 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3585249) - No such process 00:06:45.135 11:34:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3585249 is not found' 00:06:45.135 Process with pid 3585249 is not found 00:06:45.135 11:34:15 -- event/cpu_locks.sh@16 -- # [[ -z 3585365 ]] 00:06:45.135 11:34:15 -- event/cpu_locks.sh@16 -- # killprocess 3585365 00:06:45.135 11:34:15 -- common/autotest_common.sh@936 -- # '[' -z 3585365 ']' 00:06:45.135 11:34:15 -- common/autotest_common.sh@940 -- # kill -0 3585365 00:06:45.135 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3585365) - No such process 00:06:45.135 11:34:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3585365 is not found' 00:06:45.135 Process with pid 3585365 is not found 00:06:45.135 11:34:15 -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.135 00:06:45.135 real 0m19.320s 00:06:45.135 user 0m32.188s 00:06:45.135 sys 0m6.134s 00:06:45.135 11:34:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.135 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.135 ************************************ 00:06:45.135 END TEST cpu_locks 00:06:45.135 ************************************ 00:06:45.135 00:06:45.135 real 0m45.067s 00:06:45.135 user 1m24.261s 00:06:45.135 sys 0m10.145s 00:06:45.135 11:34:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.135 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.135 ************************************ 00:06:45.135 END TEST event 00:06:45.135 ************************************ 00:06:45.135 11:34:15 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:45.135 11:34:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.135 11:34:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.135 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.135 ************************************ 00:06:45.135 START TEST thread 00:06:45.135 ************************************ 00:06:45.135 11:34:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:45.393 * Looking for test storage... 
00:06:45.393 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:45.393 11:34:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:45.393 11:34:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:45.393 11:34:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:45.393 11:34:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:45.393 11:34:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:45.393 11:34:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:45.393 11:34:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:45.393 11:34:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:45.393 11:34:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:45.393 11:34:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.393 11:34:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:45.393 11:34:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:45.393 11:34:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:45.393 11:34:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:45.393 11:34:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:45.393 11:34:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:45.393 11:34:15 -- scripts/common.sh@344 -- # : 1 00:06:45.393 11:34:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:45.393 11:34:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.393 11:34:15 -- scripts/common.sh@364 -- # decimal 1 00:06:45.393 11:34:15 -- scripts/common.sh@352 -- # local d=1 00:06:45.393 11:34:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.393 11:34:15 -- scripts/common.sh@354 -- # echo 1 00:06:45.393 11:34:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:45.393 11:34:15 -- scripts/common.sh@365 -- # decimal 2 00:06:45.393 11:34:15 -- scripts/common.sh@352 -- # local d=2 00:06:45.393 11:34:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.393 11:34:15 -- scripts/common.sh@354 -- # echo 2 00:06:45.393 11:34:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:45.393 11:34:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:45.393 11:34:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:45.393 11:34:15 -- scripts/common.sh@367 -- # return 0 00:06:45.393 11:34:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.393 11:34:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.393 --rc genhtml_branch_coverage=1 00:06:45.393 --rc genhtml_function_coverage=1 00:06:45.393 --rc genhtml_legend=1 00:06:45.393 --rc geninfo_all_blocks=1 00:06:45.393 --rc geninfo_unexecuted_blocks=1 00:06:45.393 00:06:45.393 ' 00:06:45.393 11:34:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.393 --rc genhtml_branch_coverage=1 00:06:45.393 --rc genhtml_function_coverage=1 00:06:45.393 --rc genhtml_legend=1 00:06:45.393 --rc geninfo_all_blocks=1 00:06:45.393 --rc geninfo_unexecuted_blocks=1 00:06:45.393 00:06:45.393 ' 00:06:45.393 11:34:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.393 --rc genhtml_branch_coverage=1 00:06:45.393 --rc genhtml_function_coverage=1 00:06:45.393 --rc genhtml_legend=1 00:06:45.393 --rc geninfo_all_blocks=1 00:06:45.393 --rc geninfo_unexecuted_blocks=1 00:06:45.393 00:06:45.393 ' 
00:06:45.393 11:34:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.393 --rc genhtml_branch_coverage=1 00:06:45.393 --rc genhtml_function_coverage=1 00:06:45.393 --rc genhtml_legend=1 00:06:45.393 --rc geninfo_all_blocks=1 00:06:45.393 --rc geninfo_unexecuted_blocks=1 00:06:45.393 00:06:45.393 ' 00:06:45.393 11:34:15 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.393 11:34:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:45.393 11:34:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.393 11:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.393 ************************************ 00:06:45.393 START TEST thread_poller_perf 00:06:45.393 ************************************ 00:06:45.393 11:34:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.393 [2024-12-03 11:34:15.921104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.393 [2024-12-03 11:34:15.921199] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585996 ] 00:06:45.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.393 [2024-12-03 11:34:15.991369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.652 [2024-12-03 11:34:16.059510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.652 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:46.586 [2024-12-03T10:34:17.200Z] ====================================== 00:06:46.586 [2024-12-03T10:34:17.200Z] busy:2506064262 (cyc) 00:06:46.586 [2024-12-03T10:34:17.200Z] total_run_count: 409000 00:06:46.586 [2024-12-03T10:34:17.200Z] tsc_hz: 2500000000 (cyc) 00:06:46.586 [2024-12-03T10:34:17.200Z] ====================================== 00:06:46.586 [2024-12-03T10:34:17.200Z] poller_cost: 6127 (cyc), 2450 (nsec) 00:06:46.586 00:06:46.586 real 0m1.252s 00:06:46.586 user 0m1.159s 00:06:46.586 sys 0m0.090s 00:06:46.586 11:34:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.586 11:34:17 -- common/autotest_common.sh@10 -- # set +x 00:06:46.586 ************************************ 00:06:46.586 END TEST thread_poller_perf 00:06:46.586 ************************************ 00:06:46.586 11:34:17 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.586 11:34:17 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:46.586 11:34:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.586 11:34:17 -- common/autotest_common.sh@10 -- # set +x 00:06:46.586 ************************************ 00:06:46.586 START TEST thread_poller_perf 00:06:46.586 ************************************ 00:06:46.586 11:34:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.846 [2024-12-03 11:34:17.222248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:46.846 [2024-12-03 11:34:17.222347] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586191 ] 00:06:46.846 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.846 [2024-12-03 11:34:17.293552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.846 [2024-12-03 11:34:17.362394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.846 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:48.220 [2024-12-03T10:34:18.834Z] ====================================== 00:06:48.220 [2024-12-03T10:34:18.834Z] busy:2502563876 (cyc) 00:06:48.220 [2024-12-03T10:34:18.834Z] total_run_count: 5630000 00:06:48.220 [2024-12-03T10:34:18.834Z] tsc_hz: 2500000000 (cyc) 00:06:48.220 [2024-12-03T10:34:18.834Z] ====================================== 00:06:48.220 [2024-12-03T10:34:18.834Z] poller_cost: 444 (cyc), 177 (nsec) 00:06:48.220 00:06:48.220 real 0m1.247s 00:06:48.220 user 0m1.155s 00:06:48.220 sys 0m0.089s 00:06:48.220 11:34:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.220 11:34:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.220 ************************************ 00:06:48.220 END TEST thread_poller_perf 00:06:48.220 ************************************ 00:06:48.220 11:34:18 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.220 00:06:48.220 real 0m2.783s 00:06:48.220 user 0m2.440s 00:06:48.220 sys 0m0.369s 00:06:48.220 11:34:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.220 11:34:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.220 ************************************ 00:06:48.220 END TEST thread 00:06:48.220 ************************************ 00:06:48.220 11:34:18 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:48.220 11:34:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:48.220 11:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.220 11:34:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.220 ************************************ 00:06:48.220 START TEST accel 00:06:48.220 ************************************ 00:06:48.220 11:34:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:48.220 * Looking for test storage... 
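Note on the two poller_perf result blocks above: poller_cost is busy cycles divided by completed polls, converted to nanoseconds with the reported TSC rate (2.5 GHz on this node). Re-deriving the printed figures from the values in the log:

    # Re-derive poller_cost from the numbers printed by poller_perf above:
    #   cycles per poll = busy cycles / total_run_count
    #   nsec per poll   = cycles per poll / (tsc_hz / 1e9)
    recompute_cost() {
        local busy=$1 runs=$2 tsc_hz=$3
        awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
            'BEGIN { c = b / r; printf "~%d cyc, ~%d nsec\n", c, c * 1e9 / hz }'
    }
    recompute_cost 2506064262  409000  2500000000   # ~6127 cyc, ~2450 nsec (1 us period run)
    recompute_cost 2502563876 5630000  2500000000   # ~444 cyc,  ~177 nsec (0 us period run)
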
00:06:48.220 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:48.220 11:34:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:48.220 11:34:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:48.220 11:34:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:48.220 11:34:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:48.220 11:34:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:48.220 11:34:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:48.220 11:34:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:48.220 11:34:18 -- scripts/common.sh@335 -- # IFS=.-: 00:06:48.220 11:34:18 -- scripts/common.sh@335 -- # read -ra ver1 00:06:48.220 11:34:18 -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.220 11:34:18 -- scripts/common.sh@336 -- # read -ra ver2 00:06:48.220 11:34:18 -- scripts/common.sh@337 -- # local 'op=<' 00:06:48.220 11:34:18 -- scripts/common.sh@339 -- # ver1_l=2 00:06:48.220 11:34:18 -- scripts/common.sh@340 -- # ver2_l=1 00:06:48.220 11:34:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:48.220 11:34:18 -- scripts/common.sh@343 -- # case "$op" in 00:06:48.220 11:34:18 -- scripts/common.sh@344 -- # : 1 00:06:48.220 11:34:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:48.220 11:34:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.220 11:34:18 -- scripts/common.sh@364 -- # decimal 1 00:06:48.220 11:34:18 -- scripts/common.sh@352 -- # local d=1 00:06:48.220 11:34:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.220 11:34:18 -- scripts/common.sh@354 -- # echo 1 00:06:48.220 11:34:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:48.220 11:34:18 -- scripts/common.sh@365 -- # decimal 2 00:06:48.220 11:34:18 -- scripts/common.sh@352 -- # local d=2 00:06:48.220 11:34:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.220 11:34:18 -- scripts/common.sh@354 -- # echo 2 00:06:48.220 11:34:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:48.220 11:34:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:48.220 11:34:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:48.220 11:34:18 -- scripts/common.sh@367 -- # return 0 00:06:48.220 11:34:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.220 11:34:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:48.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.220 --rc genhtml_branch_coverage=1 00:06:48.221 --rc genhtml_function_coverage=1 00:06:48.221 --rc genhtml_legend=1 00:06:48.221 --rc geninfo_all_blocks=1 00:06:48.221 --rc geninfo_unexecuted_blocks=1 00:06:48.221 00:06:48.221 ' 00:06:48.221 11:34:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:48.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.221 --rc genhtml_branch_coverage=1 00:06:48.221 --rc genhtml_function_coverage=1 00:06:48.221 --rc genhtml_legend=1 00:06:48.221 --rc geninfo_all_blocks=1 00:06:48.221 --rc geninfo_unexecuted_blocks=1 00:06:48.221 00:06:48.221 ' 00:06:48.221 11:34:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:48.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.221 --rc genhtml_branch_coverage=1 00:06:48.221 --rc genhtml_function_coverage=1 00:06:48.221 --rc genhtml_legend=1 00:06:48.221 --rc geninfo_all_blocks=1 00:06:48.221 --rc geninfo_unexecuted_blocks=1 00:06:48.221 00:06:48.221 ' 
00:06:48.221 11:34:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:48.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.221 --rc genhtml_branch_coverage=1 00:06:48.221 --rc genhtml_function_coverage=1 00:06:48.221 --rc genhtml_legend=1 00:06:48.221 --rc geninfo_all_blocks=1 00:06:48.221 --rc geninfo_unexecuted_blocks=1 00:06:48.221 00:06:48.221 ' 00:06:48.221 11:34:18 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:48.221 11:34:18 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:48.221 11:34:18 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.221 11:34:18 -- accel/accel.sh@59 -- # spdk_tgt_pid=3586488 00:06:48.221 11:34:18 -- accel/accel.sh@60 -- # waitforlisten 3586488 00:06:48.221 11:34:18 -- common/autotest_common.sh@829 -- # '[' -z 3586488 ']' 00:06:48.221 11:34:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.221 11:34:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.221 11:34:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.221 11:34:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.221 11:34:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.221 11:34:18 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:48.221 11:34:18 -- accel/accel.sh@58 -- # build_accel_config 00:06:48.221 11:34:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.221 11:34:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.221 11:34:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.221 11:34:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.221 11:34:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.221 11:34:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.221 11:34:18 -- accel/accel.sh@42 -- # jq -r . 00:06:48.221 [2024-12-03 11:34:18.742223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.221 [2024-12-03 11:34:18.742284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586488 ] 00:06:48.221 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.221 [2024-12-03 11:34:18.809850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.479 [2024-12-03 11:34:18.877826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.479 [2024-12-03 11:34:18.877945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.044 11:34:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.044 11:34:19 -- common/autotest_common.sh@862 -- # return 0 00:06:49.044 11:34:19 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:49.044 11:34:19 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:49.044 11:34:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.044 11:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:49.044 11:34:19 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:49.044 11:34:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # IFS== 00:06:49.044 11:34:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.044 11:34:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.044 11:34:19 -- accel/accel.sh@67 -- # killprocess 3586488 00:06:49.044 11:34:19 -- common/autotest_common.sh@936 -- # '[' -z 3586488 ']' 00:06:49.044 11:34:19 -- common/autotest_common.sh@940 -- # kill -0 3586488 00:06:49.044 11:34:19 -- common/autotest_common.sh@941 -- # uname 00:06:49.044 11:34:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.044 11:34:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3586488 00:06:49.302 11:34:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.302 11:34:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.302 11:34:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3586488' 00:06:49.302 killing process with pid 3586488 00:06:49.302 11:34:19 -- common/autotest_common.sh@955 -- # kill 3586488 00:06:49.302 11:34:19 -- common/autotest_common.sh@960 -- # wait 3586488 00:06:49.560 11:34:19 -- accel/accel.sh@68 -- # trap - ERR 00:06:49.560 11:34:19 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:49.560 11:34:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:49.560 11:34:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.560 11:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:49.560 11:34:19 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:49.560 11:34:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.560 11:34:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.560 11:34:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:49.560 11:34:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.560 11:34:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.560 11:34:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.560 11:34:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.560 11:34:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.560 11:34:19 -- accel/accel.sh@42 -- # jq -r . 
00:06:49.560 11:34:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.560 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:49.561 11:34:20 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:49.561 11:34:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:49.561 11:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.561 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:49.561 ************************************ 00:06:49.561 START TEST accel_missing_filename 00:06:49.561 ************************************ 00:06:49.561 11:34:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:49.561 11:34:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:49.561 11:34:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:49.561 11:34:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:49.561 11:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.561 11:34:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:49.561 11:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.561 11:34:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:49.561 11:34:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:49.561 11:34:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.561 11:34:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.561 11:34:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.561 11:34:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.561 11:34:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.561 11:34:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.561 11:34:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.561 11:34:20 -- accel/accel.sh@42 -- # jq -r . 00:06:49.561 [2024-12-03 11:34:20.094928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.561 [2024-12-03 11:34:20.095003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586694 ] 00:06:49.561 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.561 [2024-12-03 11:34:20.168121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.819 [2024-12-03 11:34:20.238168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.819 [2024-12-03 11:34:20.279913] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.819 [2024-12-03 11:34:20.340938] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:49.819 A filename is required. 
00:06:49.819 11:34:20 -- common/autotest_common.sh@653 -- # es=234 00:06:49.819 11:34:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.819 11:34:20 -- common/autotest_common.sh@662 -- # es=106 00:06:49.819 11:34:20 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:49.819 11:34:20 -- common/autotest_common.sh@670 -- # es=1 00:06:49.819 11:34:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.819 00:06:49.819 real 0m0.368s 00:06:49.819 user 0m0.266s 00:06:49.819 sys 0m0.136s 00:06:49.819 11:34:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.819 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:49.819 ************************************ 00:06:49.819 END TEST accel_missing_filename 00:06:49.819 ************************************ 00:06:50.077 11:34:20 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:50.077 11:34:20 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:50.077 11:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.077 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.077 ************************************ 00:06:50.077 START TEST accel_compress_verify 00:06:50.077 ************************************ 00:06:50.077 11:34:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:50.077 11:34:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.077 11:34:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:50.077 11:34:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.077 11:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.077 11:34:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.077 11:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.077 11:34:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:50.077 11:34:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:50.078 11:34:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.078 11:34:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.078 11:34:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.078 11:34:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.078 11:34:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.078 11:34:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.078 11:34:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.078 11:34:20 -- accel/accel.sh@42 -- # jq -r . 00:06:50.078 [2024-12-03 11:34:20.510620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:50.078 [2024-12-03 11:34:20.510697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586925 ] 00:06:50.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.078 [2024-12-03 11:34:20.582357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.078 [2024-12-03 11:34:20.651010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.336 [2024-12-03 11:34:20.692355] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.336 [2024-12-03 11:34:20.752127] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:50.336 00:06:50.336 Compression does not support the verify option, aborting. 00:06:50.336 11:34:20 -- common/autotest_common.sh@653 -- # es=161 00:06:50.336 11:34:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.336 11:34:20 -- common/autotest_common.sh@662 -- # es=33 00:06:50.336 11:34:20 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:50.336 11:34:20 -- common/autotest_common.sh@670 -- # es=1 00:06:50.336 11:34:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.336 00:06:50.336 real 0m0.361s 00:06:50.336 user 0m0.266s 00:06:50.336 sys 0m0.130s 00:06:50.336 11:34:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.336 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.336 ************************************ 00:06:50.336 END TEST accel_compress_verify 00:06:50.336 ************************************ 00:06:50.336 11:34:20 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:50.336 11:34:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.336 11:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.336 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.336 ************************************ 00:06:50.336 START TEST accel_wrong_workload 00:06:50.336 ************************************ 00:06:50.336 11:34:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:50.336 11:34:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.336 11:34:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:50.336 11:34:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.336 11:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.336 11:34:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.336 11:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.336 11:34:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:50.336 11:34:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:50.336 11:34:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.336 11:34:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.336 11:34:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.336 11:34:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.336 11:34:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.337 11:34:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.337 11:34:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.337 11:34:20 -- accel/accel.sh@42 -- # jq -r . 
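Note on the negative tests in this stretch (accel_missing_filename, accel_compress_verify, and the wrong-workload and negative-buffers cases that follow): each wraps accel_perf in NOT, so the test passes exactly when accel_perf refuses to run, and the trace shows exit codes above 128 being normalized (234 -> 106, 161 -> 33) before the final check. A hedged sketch of that inversion, mirroring only the behaviour visible in the trace, not the full autotest_common.sh helper:

    # NOT-style inverter: succeed only when the wrapped command fails.
    # The >128 normalization mirrors the es=234->106 / es=161->33 steps above;
    # the real helper does more bookkeeping than this sketch.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))
        (( es != 0 ))              # invert: a non-zero exit makes NOT pass
    }
    NOT false && echo "wrapped command failed, as the negative test expects"
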
00:06:50.337 Unsupported workload type: foobar 00:06:50.337 [2024-12-03 11:34:20.913515] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:50.337 accel_perf options: 00:06:50.337 [-h help message] 00:06:50.337 [-q queue depth per core] 00:06:50.337 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.337 [-T number of threads per core 00:06:50.337 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.337 [-t time in seconds] 00:06:50.337 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.337 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.337 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.337 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.337 [-S for crc32c workload, use this seed value (default 0) 00:06:50.337 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.337 [-f for fill workload, use this BYTE value (default 255) 00:06:50.337 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.337 [-y verify result if this switch is on] 00:06:50.337 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.337 Can be used to spread operations across a wider range of memory. 00:06:50.337 11:34:20 -- common/autotest_common.sh@653 -- # es=1 00:06:50.337 11:34:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.337 11:34:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.337 11:34:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.337 00:06:50.337 real 0m0.038s 00:06:50.337 user 0m0.025s 00:06:50.337 sys 0m0.013s 00:06:50.337 11:34:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.337 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.337 ************************************ 00:06:50.337 END TEST accel_wrong_workload 00:06:50.337 ************************************ 00:06:50.337 Error: writing output failed: Broken pipe 00:06:50.595 11:34:20 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.595 11:34:20 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:50.595 11:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.595 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.595 ************************************ 00:06:50.595 START TEST accel_negative_buffers 00:06:50.595 ************************************ 00:06:50.595 11:34:20 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.595 11:34:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.595 11:34:20 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:50.595 11:34:20 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.595 11:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.595 11:34:20 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.595 11:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.595 11:34:20 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:50.596 11:34:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:06:50.596 11:34:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.596 11:34:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.596 11:34:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.596 11:34:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.596 11:34:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.596 11:34:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.596 11:34:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.596 11:34:20 -- accel/accel.sh@42 -- # jq -r . 00:06:50.596 -x option must be non-negative. 00:06:50.596 [2024-12-03 11:34:20.979248] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:50.596 accel_perf options: 00:06:50.596 [-h help message] 00:06:50.596 [-q queue depth per core] 00:06:50.596 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.596 [-T number of threads per core 00:06:50.596 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.596 [-t time in seconds] 00:06:50.596 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.596 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.596 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.596 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.596 [-S for crc32c workload, use this seed value (default 0) 00:06:50.596 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.596 [-f for fill workload, use this BYTE value (default 255) 00:06:50.596 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.596 [-y verify result if this switch is on] 00:06:50.596 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.596 Can be used to spread operations across a wider range of memory. 
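Note on the accel_perf option summary printed above: it maps directly onto the invocations used through the rest of this accel suite. Representative calls, taken from the commands that appear later in this log (each real run additionally passes -c /dev/fd/62, the JSON config assembled by build_accel_config):

    # Representative accel_perf invocations from this log, binary path as built
    # in this workspace.
    PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf

    $PERF -t 1 -w crc32c -S 32 -y     # 1 s of CRC-32C, seed 32, verify results
    $PERF -t 1 -w crc32c -y -C 2      # CRC-32C over 2-buffer io vectors
    $PERF -t 1 -w copy -y             # plain copy workload
    $PERF -t 1 -w compress -y \
        -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib   # compress needs -l <input file>
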
00:06:50.596 11:34:20 -- common/autotest_common.sh@653 -- # es=1 00:06:50.596 11:34:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.596 11:34:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.596 11:34:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.596 00:06:50.596 real 0m0.023s 00:06:50.596 user 0m0.012s 00:06:50.596 sys 0m0.011s 00:06:50.596 11:34:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.596 11:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.596 ************************************ 00:06:50.596 END TEST accel_negative_buffers 00:06:50.596 ************************************ 00:06:50.596 Error: writing output failed: Broken pipe 00:06:50.596 11:34:21 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:50.596 11:34:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:50.596 11:34:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.596 11:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:50.596 ************************************ 00:06:50.596 START TEST accel_crc32c 00:06:50.596 ************************************ 00:06:50.596 11:34:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:50.596 11:34:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.596 11:34:21 -- accel/accel.sh@17 -- # local accel_module 00:06:50.596 11:34:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:50.596 11:34:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:50.596 11:34:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.596 11:34:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.596 11:34:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.596 11:34:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.596 11:34:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.596 11:34:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.596 11:34:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.596 11:34:21 -- accel/accel.sh@42 -- # jq -r . 00:06:50.596 [2024-12-03 11:34:21.057880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.596 [2024-12-03 11:34:21.057936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3587006 ] 00:06:50.596 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.596 [2024-12-03 11:34:21.128100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.596 [2024-12-03 11:34:21.194580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.970 11:34:22 -- accel/accel.sh@18 -- # out=' 00:06:51.970 SPDK Configuration: 00:06:51.970 Core mask: 0x1 00:06:51.970 00:06:51.970 Accel Perf Configuration: 00:06:51.970 Workload Type: crc32c 00:06:51.970 CRC-32C seed: 32 00:06:51.970 Transfer size: 4096 bytes 00:06:51.970 Vector count 1 00:06:51.970 Module: software 00:06:51.970 Queue depth: 32 00:06:51.970 Allocate depth: 32 00:06:51.970 # threads/core: 1 00:06:51.970 Run time: 1 seconds 00:06:51.970 Verify: Yes 00:06:51.970 00:06:51.970 Running for 1 seconds... 
00:06:51.970 00:06:51.970 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.970 ------------------------------------------------------------------------------------ 00:06:51.970 0,0 596672/s 2330 MiB/s 0 0 00:06:51.970 ==================================================================================== 00:06:51.970 Total 596672/s 2330 MiB/s 0 0' 00:06:51.970 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:51.970 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:51.970 11:34:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:51.970 11:34:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.970 11:34:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.970 11:34:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:51.970 11:34:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.970 11:34:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.970 11:34:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.970 11:34:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.970 11:34:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.970 11:34:22 -- accel/accel.sh@42 -- # jq -r . 00:06:51.970 [2024-12-03 11:34:22.413926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.970 [2024-12-03 11:34:22.413998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3587278 ] 00:06:51.970 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.970 [2024-12-03 11:34:22.481824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.970 [2024-12-03 11:34:22.545420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val= 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val= 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val=0x1 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val= 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val= 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val=crc32c 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val=32 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 
-- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val= 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val=software 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val=32 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val=32 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val=1 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val=Yes 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val= 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.228 11:34:22 -- accel/accel.sh@21 -- # val= 00:06:52.228 11:34:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.228 11:34:22 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:34:23 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:34:23 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:34:23 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:34:23 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:34:23 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:34:23 -- accel/accel.sh@22 -- # case "$var" in 
00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:34:23 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:34:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:34:23 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:34:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.162 11:34:23 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:53.162 11:34:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.162 00:06:53.162 real 0m2.713s 00:06:53.162 user 0m2.468s 00:06:53.162 sys 0m0.254s 00:06:53.162 11:34:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.162 11:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.162 ************************************ 00:06:53.162 END TEST accel_crc32c 00:06:53.162 ************************************ 00:06:53.421 11:34:23 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:53.421 11:34:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:53.421 11:34:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.421 11:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.421 ************************************ 00:06:53.421 START TEST accel_crc32c_C2 00:06:53.421 ************************************ 00:06:53.421 11:34:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:53.421 11:34:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.421 11:34:23 -- accel/accel.sh@17 -- # local accel_module 00:06:53.421 11:34:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:53.421 11:34:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:53.421 11:34:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.421 11:34:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.421 11:34:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.421 11:34:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.421 11:34:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.421 11:34:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.421 11:34:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.421 11:34:23 -- accel/accel.sh@42 -- # jq -r . 00:06:53.421 [2024-12-03 11:34:23.815809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:53.421 [2024-12-03 11:34:23.815899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3587561 ] 00:06:53.421 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.421 [2024-12-03 11:34:23.885795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.421 [2024-12-03 11:34:23.951469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.792 11:34:25 -- accel/accel.sh@18 -- # out=' 00:06:54.792 SPDK Configuration: 00:06:54.792 Core mask: 0x1 00:06:54.792 00:06:54.792 Accel Perf Configuration: 00:06:54.792 Workload Type: crc32c 00:06:54.792 CRC-32C seed: 0 00:06:54.792 Transfer size: 4096 bytes 00:06:54.792 Vector count 2 00:06:54.792 Module: software 00:06:54.792 Queue depth: 32 00:06:54.792 Allocate depth: 32 00:06:54.792 # threads/core: 1 00:06:54.792 Run time: 1 seconds 00:06:54.792 Verify: Yes 00:06:54.792 00:06:54.792 Running for 1 seconds... 00:06:54.792 00:06:54.792 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.792 ------------------------------------------------------------------------------------ 00:06:54.792 0,0 476992/s 3726 MiB/s 0 0 00:06:54.792 ==================================================================================== 00:06:54.792 Total 476992/s 1863 MiB/s 0 0' 00:06:54.792 11:34:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.792 11:34:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.792 11:34:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.792 11:34:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.792 11:34:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.792 11:34:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.792 11:34:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.792 11:34:25 -- accel/accel.sh@42 -- # jq -r . 00:06:54.792 [2024-12-03 11:34:25.157693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
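Note on the MiB/s columns in these result tables: they follow from transfers per second times the bytes moved per transfer. In the single-buffer crc32c run that is 4096 bytes; in the -C 2 run the per-core row corresponds to 2 x 4096 bytes per transfer while the Total row corresponds to 4096, which is why the same 476992 transfers/s shows up as 3726 and 1863 MiB/s. Reproducing the figures from the log:

    # Reproduce the MiB/s columns from the crc32c results above:
    #   MiB/s = transfers_per_sec * bytes_per_transfer / 2^20
    mib() { awk -v t="$1" -v bytes="$2" 'BEGIN { printf "~%d MiB/s\n", t * bytes / (1024 * 1024) }'; }
    mib 596672 4096            # ~2330 MiB/s: single-buffer crc32c run
    mib 476992 $((2 * 4096))   # ~3726 MiB/s: per-core row of the -C 2 run
    mib 476992 4096            # ~1863 MiB/s: "Total" row of the same run
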
00:06:54.792 [2024-12-03 11:34:25.157750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3587831 ] 00:06:54.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.792 [2024-12-03 11:34:25.220620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.792 [2024-12-03 11:34:25.285736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val= 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val= 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val=0x1 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val= 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val= 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val=crc32c 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val=0 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val= 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val=software 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val=32 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val=32 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- 
accel/accel.sh@21 -- # val=1 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val=Yes 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val= 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:54.792 11:34:25 -- accel/accel.sh@21 -- # val= 00:06:54.792 11:34:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # IFS=: 00:06:54.792 11:34:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.164 11:34:26 -- accel/accel.sh@21 -- # val= 00:06:56.164 11:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.164 11:34:26 -- accel/accel.sh@21 -- # val= 00:06:56.164 11:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.164 11:34:26 -- accel/accel.sh@21 -- # val= 00:06:56.164 11:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.164 11:34:26 -- accel/accel.sh@21 -- # val= 00:06:56.164 11:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.164 11:34:26 -- accel/accel.sh@21 -- # val= 00:06:56.164 11:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.164 11:34:26 -- accel/accel.sh@21 -- # val= 00:06:56.164 11:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.164 11:34:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.164 11:34:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.164 11:34:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:56.164 11:34:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.164 00:06:56.164 real 0m2.698s 00:06:56.164 user 0m2.448s 00:06:56.164 sys 0m0.259s 00:06:56.164 11:34:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.164 11:34:26 -- common/autotest_common.sh@10 -- # set +x 00:06:56.164 ************************************ 00:06:56.164 END TEST accel_crc32c_C2 00:06:56.164 ************************************ 00:06:56.164 11:34:26 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:56.164 11:34:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:56.164 11:34:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.164 11:34:26 -- common/autotest_common.sh@10 -- # set +x 00:06:56.164 ************************************ 00:06:56.164 START TEST accel_copy 
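Note on the long "read -r var val" stretches above: accel_test re-parses the "SPDK Configuration:" block printed by accel_perf (IFS=: splits each line into label and value), keeps the workload type and module (accel_opc=crc32c, accel_module=software in the trace), and finally asserts the opcode really ran on the expected module. A condensed, illustrative sketch; the case patterns and the perf_output.txt stand-in are assumptions, only the variable names and the final checks come from the trace:

    # Condensed sketch of the config-dump parsing and module check seen above.
    accel_opc="" accel_module=""
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;
            *Module*)          accel_module=${val//[[:space:]]/} ;;
        esac
    done < perf_output.txt        # stand-in for the live accel_perf output
    [[ -n "$accel_module" && -n "$accel_opc" ]] \
        && [[ "$accel_module" == "${expected_opcs[$accel_opc]:-software}" ]] \
        && echo "$accel_opc handled by the expected module"
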
00:06:56.164 ************************************ 00:06:56.164 11:34:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:56.164 11:34:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.164 11:34:26 -- accel/accel.sh@17 -- # local accel_module 00:06:56.164 11:34:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:56.164 11:34:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:56.164 11:34:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.164 11:34:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.164 11:34:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.164 11:34:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.164 11:34:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.164 11:34:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.164 11:34:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.164 11:34:26 -- accel/accel.sh@42 -- # jq -r . 00:06:56.164 [2024-12-03 11:34:26.557268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.164 [2024-12-03 11:34:26.557338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588041 ] 00:06:56.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.164 [2024-12-03 11:34:26.626441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.164 [2024-12-03 11:34:26.693738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.657 11:34:27 -- accel/accel.sh@18 -- # out=' 00:06:57.657 SPDK Configuration: 00:06:57.657 Core mask: 0x1 00:06:57.657 00:06:57.657 Accel Perf Configuration: 00:06:57.657 Workload Type: copy 00:06:57.657 Transfer size: 4096 bytes 00:06:57.657 Vector count 1 00:06:57.657 Module: software 00:06:57.657 Queue depth: 32 00:06:57.657 Allocate depth: 32 00:06:57.657 # threads/core: 1 00:06:57.657 Run time: 1 seconds 00:06:57.657 Verify: Yes 00:06:57.657 00:06:57.657 Running for 1 seconds... 00:06:57.657 00:06:57.657 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.657 ------------------------------------------------------------------------------------ 00:06:57.657 0,0 446624/s 1744 MiB/s 0 0 00:06:57.657 ==================================================================================== 00:06:57.657 Total 446624/s 1744 MiB/s 0 0' 00:06:57.657 11:34:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:57.657 11:34:27 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:27 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:57.657 11:34:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.657 11:34:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.657 11:34:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.657 11:34:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.657 11:34:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.657 11:34:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.657 11:34:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.657 11:34:27 -- accel/accel.sh@42 -- # jq -r . 00:06:57.657 [2024-12-03 11:34:27.898584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
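A quick arithmetic spot-check of the copy result above (an editorial one-liner, not produced by accel.sh or accel_perf): the Bandwidth column is just transfers per second multiplied by the 4096-byte transfer size from the configuration block, converted to MiB/s.

# hypothetical shell check, not part of the test suite
echo $(( 446624 * 4096 / 1048576 ))   # prints 1744, matching the logged 1744 MiB/s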
00:06:57.657 [2024-12-03 11:34:27.898642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588219 ] 00:06:57.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.657 [2024-12-03 11:34:27.965918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.657 [2024-12-03 11:34:28.029657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val= 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val= 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val=0x1 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val= 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val= 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val=copy 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val= 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val=software 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val=32 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val=32 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val=1 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val=Yes 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val= 00:06:57.657 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.657 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.657 11:34:28 -- accel/accel.sh@21 -- # val= 00:06:57.658 11:34:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.658 11:34:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.658 11:34:28 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 11:34:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 11:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 11:34:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 11:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 11:34:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 11:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 11:34:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 11:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 11:34:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 11:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 11:34:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 11:34:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 11:34:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 11:34:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.032 11:34:29 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:59.032 11:34:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.033 00:06:59.033 real 0m2.699s 00:06:59.033 user 0m2.466s 00:06:59.033 sys 0m0.239s 00:06:59.033 11:34:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.033 11:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.033 ************************************ 00:06:59.033 END TEST accel_copy 00:06:59.033 ************************************ 00:06:59.033 11:34:29 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.033 11:34:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:59.033 11:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.033 11:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.033 ************************************ 00:06:59.033 START TEST accel_fill 00:06:59.033 ************************************ 00:06:59.033 11:34:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.033 11:34:29 -- accel/accel.sh@16 -- # local accel_opc 
00:06:59.033 11:34:29 -- accel/accel.sh@17 -- # local accel_module 00:06:59.033 11:34:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.033 11:34:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.033 11:34:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.033 11:34:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.033 11:34:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.033 11:34:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.033 11:34:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.033 11:34:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.033 11:34:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.033 11:34:29 -- accel/accel.sh@42 -- # jq -r . 00:06:59.033 [2024-12-03 11:34:29.295638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.033 [2024-12-03 11:34:29.295714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588432 ] 00:06:59.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.033 [2024-12-03 11:34:29.366349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.033 [2024-12-03 11:34:29.432368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.415 11:34:30 -- accel/accel.sh@18 -- # out=' 00:07:00.415 SPDK Configuration: 00:07:00.415 Core mask: 0x1 00:07:00.415 00:07:00.415 Accel Perf Configuration: 00:07:00.415 Workload Type: fill 00:07:00.415 Fill pattern: 0x80 00:07:00.415 Transfer size: 4096 bytes 00:07:00.415 Vector count 1 00:07:00.415 Module: software 00:07:00.415 Queue depth: 64 00:07:00.415 Allocate depth: 64 00:07:00.415 # threads/core: 1 00:07:00.415 Run time: 1 seconds 00:07:00.415 Verify: Yes 00:07:00.415 00:07:00.415 Running for 1 seconds... 00:07:00.415 00:07:00.415 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.415 ------------------------------------------------------------------------------------ 00:07:00.415 0,0 673024/s 2629 MiB/s 0 0 00:07:00.415 ==================================================================================== 00:07:00.415 Total 673024/s 2629 MiB/s 0 0' 00:07:00.415 11:34:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.415 11:34:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.415 11:34:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.415 11:34:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.415 11:34:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.415 11:34:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.415 11:34:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.415 11:34:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.415 11:34:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.415 11:34:30 -- accel/accel.sh@42 -- # jq -r . 00:07:00.415 [2024-12-03 11:34:30.640936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
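For the fill case the harness passes -f 128 -q 64 -a 64; 128 decimal is the 0x80 fill pattern reported in the SPDK configuration, and 64 is both the queue depth and the allocate depth. A sketch of an equivalent manual invocation, using only the binary path and flags that appear in the log (assumption: the -c /dev/fd/62 JSON accel config that build_accel_config pipes in via jq is dropped here, which may change which accel modules get loaded):

# sketch only -- same workload flags the harness logs above
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w fill -f 128 -q 64 -a 64 -y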
00:07:00.415 [2024-12-03 11:34:30.640993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588699 ] 00:07:00.415 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.415 [2024-12-03 11:34:30.708448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.415 [2024-12-03 11:34:30.773037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.415 11:34:30 -- accel/accel.sh@21 -- # val= 00:07:00.415 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.415 11:34:30 -- accel/accel.sh@21 -- # val= 00:07:00.415 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.415 11:34:30 -- accel/accel.sh@21 -- # val=0x1 00:07:00.415 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.415 11:34:30 -- accel/accel.sh@21 -- # val= 00:07:00.415 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.415 11:34:30 -- accel/accel.sh@21 -- # val= 00:07:00.415 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.415 11:34:30 -- accel/accel.sh@21 -- # val=fill 00:07:00.415 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.415 11:34:30 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.415 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.415 11:34:30 -- accel/accel.sh@21 -- # val=0x80 00:07:00.415 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val= 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val=software 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val=64 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val=64 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- 
accel/accel.sh@21 -- # val=1 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val=Yes 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val= 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.416 11:34:30 -- accel/accel.sh@21 -- # val= 00:07:00.416 11:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.416 11:34:30 -- accel/accel.sh@20 -- # read -r var val 00:07:01.789 11:34:31 -- accel/accel.sh@21 -- # val= 00:07:01.790 11:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # IFS=: 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # read -r var val 00:07:01.790 11:34:31 -- accel/accel.sh@21 -- # val= 00:07:01.790 11:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # IFS=: 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # read -r var val 00:07:01.790 11:34:31 -- accel/accel.sh@21 -- # val= 00:07:01.790 11:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # IFS=: 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # read -r var val 00:07:01.790 11:34:31 -- accel/accel.sh@21 -- # val= 00:07:01.790 11:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # IFS=: 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # read -r var val 00:07:01.790 11:34:31 -- accel/accel.sh@21 -- # val= 00:07:01.790 11:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # IFS=: 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # read -r var val 00:07:01.790 11:34:31 -- accel/accel.sh@21 -- # val= 00:07:01.790 11:34:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # IFS=: 00:07:01.790 11:34:31 -- accel/accel.sh@20 -- # read -r var val 00:07:01.790 11:34:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.790 11:34:31 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:01.790 11:34:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.790 00:07:01.790 real 0m2.706s 00:07:01.790 user 0m2.470s 00:07:01.790 sys 0m0.245s 00:07:01.790 11:34:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.790 11:34:31 -- common/autotest_common.sh@10 -- # set +x 00:07:01.790 ************************************ 00:07:01.790 END TEST accel_fill 00:07:01.790 ************************************ 00:07:01.790 11:34:32 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:01.790 11:34:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:01.790 11:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.790 11:34:32 -- common/autotest_common.sh@10 -- # set +x 00:07:01.790 ************************************ 00:07:01.790 START TEST 
accel_copy_crc32c 00:07:01.790 ************************************ 00:07:01.790 11:34:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:01.790 11:34:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.790 11:34:32 -- accel/accel.sh@17 -- # local accel_module 00:07:01.790 11:34:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:01.790 11:34:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:01.790 11:34:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.790 11:34:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.790 11:34:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.790 11:34:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.790 11:34:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.790 11:34:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.790 11:34:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.790 11:34:32 -- accel/accel.sh@42 -- # jq -r . 00:07:01.790 [2024-12-03 11:34:32.045906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.790 [2024-12-03 11:34:32.045977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588985 ] 00:07:01.790 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.790 [2024-12-03 11:34:32.115050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.790 [2024-12-03 11:34:32.181290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.165 11:34:33 -- accel/accel.sh@18 -- # out=' 00:07:03.165 SPDK Configuration: 00:07:03.165 Core mask: 0x1 00:07:03.165 00:07:03.165 Accel Perf Configuration: 00:07:03.165 Workload Type: copy_crc32c 00:07:03.165 CRC-32C seed: 0 00:07:03.165 Vector size: 4096 bytes 00:07:03.165 Transfer size: 4096 bytes 00:07:03.165 Vector count 1 00:07:03.165 Module: software 00:07:03.165 Queue depth: 32 00:07:03.165 Allocate depth: 32 00:07:03.165 # threads/core: 1 00:07:03.165 Run time: 1 seconds 00:07:03.165 Verify: Yes 00:07:03.165 00:07:03.165 Running for 1 seconds... 00:07:03.165 00:07:03.165 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.165 ------------------------------------------------------------------------------------ 00:07:03.165 0,0 339104/s 1324 MiB/s 0 0 00:07:03.165 ==================================================================================== 00:07:03.165 Total 339104/s 1324 MiB/s 0 0' 00:07:03.165 11:34:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.165 11:34:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:03.165 11:34:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.165 11:34:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.165 11:34:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.165 11:34:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.165 11:34:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.165 11:34:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.165 11:34:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.165 11:34:33 -- accel/accel.sh@42 -- # jq -r . 
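The copy_crc32c workload does the copy and a CRC-32C computation (seed 0, per the configuration above) in one pass, so comparing this single run against the plain software copy earlier in the log gives a rough feel for the cost of the CRC pass; a back-of-the-envelope check (hypothetical one-liner, not a rigorous benchmark):

# software copy_crc32c vs software copy, this run only, same 4096-byte size and queue depth 32
echo "scale=3; 339104 / 446624" | bc   # ~.759, i.e. roughly a quarter of the copy throughput goes to the CRC pass here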
00:07:03.165 [2024-12-03 11:34:33.388963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.165 [2024-12-03 11:34:33.389020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589253 ] 00:07:03.165 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.165 [2024-12-03 11:34:33.456406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.165 [2024-12-03 11:34:33.520073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.165 11:34:33 -- accel/accel.sh@21 -- # val= 00:07:03.165 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.165 11:34:33 -- accel/accel.sh@21 -- # val= 00:07:03.165 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.165 11:34:33 -- accel/accel.sh@21 -- # val=0x1 00:07:03.165 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.165 11:34:33 -- accel/accel.sh@21 -- # val= 00:07:03.165 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.165 11:34:33 -- accel/accel.sh@21 -- # val= 00:07:03.165 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.165 11:34:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:03.165 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.165 11:34:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.165 11:34:33 -- accel/accel.sh@21 -- # val=0 00:07:03.165 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.165 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.165 11:34:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.165 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val= 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val=software 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val=32 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 
00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val=32 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val=1 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val=Yes 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val= 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.166 11:34:33 -- accel/accel.sh@21 -- # val= 00:07:03.166 11:34:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.166 11:34:33 -- accel/accel.sh@20 -- # read -r var val 00:07:04.101 11:34:34 -- accel/accel.sh@21 -- # val= 00:07:04.101 11:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.101 11:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.101 11:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.101 11:34:34 -- accel/accel.sh@21 -- # val= 00:07:04.101 11:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.101 11:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.101 11:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.101 11:34:34 -- accel/accel.sh@21 -- # val= 00:07:04.101 11:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.101 11:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.101 11:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.101 11:34:34 -- accel/accel.sh@21 -- # val= 00:07:04.101 11:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.101 11:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.101 11:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.359 11:34:34 -- accel/accel.sh@21 -- # val= 00:07:04.359 11:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.359 11:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.359 11:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.359 11:34:34 -- accel/accel.sh@21 -- # val= 00:07:04.359 11:34:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.359 11:34:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.359 11:34:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.359 11:34:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.359 11:34:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:04.359 11:34:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.359 00:07:04.359 real 0m2.701s 00:07:04.359 user 0m2.466s 00:07:04.359 sys 0m0.243s 00:07:04.359 11:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.359 11:34:34 -- common/autotest_common.sh@10 -- # set +x 00:07:04.359 ************************************ 00:07:04.359 END TEST accel_copy_crc32c 00:07:04.359 ************************************ 00:07:04.359 
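The real/user/sys lines appear to be the shell's time output wrapped around each accel_test invocation by run_test; the ~2.70 s elapsed is consistent with the two back-to-back 1-second accel_perf passes each test performs plus startup overhead. Timing a single pass by hand would look roughly like the sketch below (same binary path and workload flags as logged; the JSON config fed over /dev/fd/62 is omitted):

# sketch: time one accel_perf pass the way the harness times the whole test
time /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y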
11:34:34 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.359 11:34:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:04.359 11:34:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.359 11:34:34 -- common/autotest_common.sh@10 -- # set +x 00:07:04.359 ************************************ 00:07:04.359 START TEST accel_copy_crc32c_C2 00:07:04.359 ************************************ 00:07:04.359 11:34:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.359 11:34:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.359 11:34:34 -- accel/accel.sh@17 -- # local accel_module 00:07:04.359 11:34:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:04.359 11:34:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:04.359 11:34:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.359 11:34:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.359 11:34:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.359 11:34:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.359 11:34:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.359 11:34:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.359 11:34:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.359 11:34:34 -- accel/accel.sh@42 -- # jq -r . 00:07:04.359 [2024-12-03 11:34:34.786331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.359 [2024-12-03 11:34:34.786401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589540 ] 00:07:04.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.359 [2024-12-03 11:34:34.854796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.359 [2024-12-03 11:34:34.919963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.734 11:34:36 -- accel/accel.sh@18 -- # out=' 00:07:05.734 SPDK Configuration: 00:07:05.734 Core mask: 0x1 00:07:05.734 00:07:05.734 Accel Perf Configuration: 00:07:05.734 Workload Type: copy_crc32c 00:07:05.734 CRC-32C seed: 0 00:07:05.734 Vector size: 4096 bytes 00:07:05.734 Transfer size: 8192 bytes 00:07:05.734 Vector count 2 00:07:05.734 Module: software 00:07:05.734 Queue depth: 32 00:07:05.734 Allocate depth: 32 00:07:05.734 # threads/core: 1 00:07:05.734 Run time: 1 seconds 00:07:05.734 Verify: Yes 00:07:05.734 00:07:05.734 Running for 1 seconds... 
00:07:05.734 00:07:05.734 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.734 ------------------------------------------------------------------------------------ 00:07:05.734 0,0 246784/s 1928 MiB/s 0 0 00:07:05.734 ==================================================================================== 00:07:05.734 Total 246784/s 964 MiB/s 0 0' 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:05.734 11:34:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.734 11:34:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.734 11:34:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.734 11:34:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:05.734 11:34:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.734 11:34:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.734 11:34:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.734 11:34:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.734 11:34:36 -- accel/accel.sh@42 -- # jq -r . 00:07:05.734 [2024-12-03 11:34:36.140530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.734 [2024-12-03 11:34:36.140602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589808 ] 00:07:05.734 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.734 [2024-12-03 11:34:36.208467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.734 [2024-12-03 11:34:36.272446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val= 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val= 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val=0x1 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val= 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val= 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val=0 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 
00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val= 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val=software 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val=32 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val=32 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val=1 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val=Yes 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val= 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:05.734 11:34:36 -- accel/accel.sh@21 -- # val= 00:07:05.734 11:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # IFS=: 00:07:05.734 11:34:36 -- accel/accel.sh@20 -- # read -r var val 00:07:07.105 11:34:37 -- accel/accel.sh@21 -- # val= 00:07:07.105 11:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.105 11:34:37 -- accel/accel.sh@21 -- # val= 00:07:07.105 11:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.105 11:34:37 -- accel/accel.sh@21 -- # val= 00:07:07.105 11:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.105 11:34:37 -- accel/accel.sh@21 -- # val= 00:07:07.105 11:34:37 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.105 11:34:37 -- accel/accel.sh@21 -- # val= 00:07:07.105 11:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.105 11:34:37 -- accel/accel.sh@21 -- # val= 00:07:07.105 11:34:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.105 11:34:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.105 11:34:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.105 11:34:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:07.105 11:34:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.105 00:07:07.105 real 0m2.712s 00:07:07.105 user 0m2.472s 00:07:07.105 sys 0m0.249s 00:07:07.105 11:34:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.105 11:34:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.105 ************************************ 00:07:07.105 END TEST accel_copy_crc32c_C2 00:07:07.105 ************************************ 00:07:07.105 11:34:37 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:07.105 11:34:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:07.105 11:34:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.105 11:34:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.105 ************************************ 00:07:07.105 START TEST accel_dualcast 00:07:07.105 ************************************ 00:07:07.105 11:34:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:07.105 11:34:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.105 11:34:37 -- accel/accel.sh@17 -- # local accel_module 00:07:07.105 11:34:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:07.105 11:34:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:07.105 11:34:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.105 11:34:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.105 11:34:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.105 11:34:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.105 11:34:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.105 11:34:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.105 11:34:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.105 11:34:37 -- accel/accel.sh@42 -- # jq -r . 00:07:07.105 [2024-12-03 11:34:37.544867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
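Backing up to the copy_crc32c -C 2 run above: the -C 2 flag raises the vector count to 2, so each operation carries two 4096-byte vectors, which is the 8192-byte transfer size shown in that configuration block. A spot-check of the logged figures (editorial one-liners, not harness output) shows the per-core MiB/s tracking the 8192-byte transfer while the Total line matches the 4096-byte vector size:

echo $(( 2 * 4096 ))                   # 8192-byte transfer per operation
echo $(( 246784 * 8192 / 1048576 ))    # 1928 -> matches the per-core MiB/s line
echo $(( 246784 * 4096 / 1048576 ))    # 964  -> matches the Total line, i.e. the vector size counted once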
00:07:07.105 [2024-12-03 11:34:37.544940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590076 ] 00:07:07.105 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.105 [2024-12-03 11:34:37.614340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.105 [2024-12-03 11:34:37.680018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.475 11:34:38 -- accel/accel.sh@18 -- # out=' 00:07:08.475 SPDK Configuration: 00:07:08.475 Core mask: 0x1 00:07:08.475 00:07:08.475 Accel Perf Configuration: 00:07:08.475 Workload Type: dualcast 00:07:08.475 Transfer size: 4096 bytes 00:07:08.475 Vector count 1 00:07:08.475 Module: software 00:07:08.475 Queue depth: 32 00:07:08.475 Allocate depth: 32 00:07:08.475 # threads/core: 1 00:07:08.475 Run time: 1 seconds 00:07:08.475 Verify: Yes 00:07:08.475 00:07:08.475 Running for 1 seconds... 00:07:08.475 00:07:08.475 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.475 ------------------------------------------------------------------------------------ 00:07:08.475 0,0 531104/s 2074 MiB/s 0 0 00:07:08.475 ==================================================================================== 00:07:08.475 Total 531104/s 2074 MiB/s 0 0' 00:07:08.475 11:34:38 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:38 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:08.475 11:34:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.475 11:34:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.475 11:34:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.475 11:34:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:08.475 11:34:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.475 11:34:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.475 11:34:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.475 11:34:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.475 11:34:38 -- accel/accel.sh@42 -- # jq -r . 00:07:08.475 [2024-12-03 11:34:38.900052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
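For dualcast, which copies one source buffer into two destinations, the logged MiB/s again reflects the 4096-byte transfer size counted once rather than the doubled output; a quick check under the same assumptions as the earlier spot-checks:

echo $(( 531104 * 4096 / 1048576 ))   # 2074 -> matches the logged dualcast MiB/s despite the two destination writes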
00:07:08.475 [2024-12-03 11:34:38.900156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590268 ] 00:07:08.475 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.475 [2024-12-03 11:34:38.970730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.475 [2024-12-03 11:34:39.038965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val= 00:07:08.475 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val= 00:07:08.475 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val=0x1 00:07:08.475 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val= 00:07:08.475 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val= 00:07:08.475 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val=dualcast 00:07:08.475 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.475 11:34:39 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.475 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val= 00:07:08.475 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.475 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.475 11:34:39 -- accel/accel.sh@21 -- # val=software 00:07:08.731 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.731 11:34:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.731 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.731 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.731 11:34:39 -- accel/accel.sh@21 -- # val=32 00:07:08.731 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.731 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.731 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.731 11:34:39 -- accel/accel.sh@21 -- # val=32 00:07:08.731 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.731 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.732 11:34:39 -- accel/accel.sh@21 -- # val=1 00:07:08.732 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.732 11:34:39 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.732 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.732 11:34:39 -- accel/accel.sh@21 -- # val=Yes 00:07:08.732 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.732 11:34:39 -- accel/accel.sh@21 -- # val= 00:07:08.732 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:08.732 11:34:39 -- accel/accel.sh@21 -- # val= 00:07:08.732 11:34:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:08.732 11:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:09.666 11:34:40 -- accel/accel.sh@21 -- # val= 00:07:09.666 11:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:09.666 11:34:40 -- accel/accel.sh@21 -- # val= 00:07:09.666 11:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:09.666 11:34:40 -- accel/accel.sh@21 -- # val= 00:07:09.666 11:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:09.666 11:34:40 -- accel/accel.sh@21 -- # val= 00:07:09.666 11:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:09.666 11:34:40 -- accel/accel.sh@21 -- # val= 00:07:09.666 11:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:09.666 11:34:40 -- accel/accel.sh@21 -- # val= 00:07:09.666 11:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:09.666 11:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:09.666 11:34:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.666 11:34:40 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:09.666 11:34:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.666 00:07:09.666 real 0m2.724s 00:07:09.666 user 0m2.478s 00:07:09.666 sys 0m0.254s 00:07:09.666 11:34:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.666 11:34:40 -- common/autotest_common.sh@10 -- # set +x 00:07:09.666 ************************************ 00:07:09.666 END TEST accel_dualcast 00:07:09.666 ************************************ 00:07:09.925 11:34:40 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:09.925 11:34:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:09.925 11:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.925 11:34:40 -- common/autotest_common.sh@10 -- # set +x 00:07:09.925 ************************************ 00:07:09.925 START TEST accel_compare 00:07:09.925 ************************************ 00:07:09.925 11:34:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:09.925 11:34:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.925 11:34:40 
-- accel/accel.sh@17 -- # local accel_module 00:07:09.925 11:34:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:09.925 11:34:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:09.925 11:34:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.925 11:34:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.925 11:34:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.925 11:34:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.925 11:34:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.925 11:34:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.925 11:34:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.925 11:34:40 -- accel/accel.sh@42 -- # jq -r . 00:07:09.925 [2024-12-03 11:34:40.312932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.925 [2024-12-03 11:34:40.313001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590497 ] 00:07:09.925 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.925 [2024-12-03 11:34:40.383482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.925 [2024-12-03 11:34:40.451236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.301 11:34:41 -- accel/accel.sh@18 -- # out=' 00:07:11.301 SPDK Configuration: 00:07:11.301 Core mask: 0x1 00:07:11.301 00:07:11.301 Accel Perf Configuration: 00:07:11.301 Workload Type: compare 00:07:11.301 Transfer size: 4096 bytes 00:07:11.301 Vector count 1 00:07:11.301 Module: software 00:07:11.301 Queue depth: 32 00:07:11.301 Allocate depth: 32 00:07:11.301 # threads/core: 1 00:07:11.301 Run time: 1 seconds 00:07:11.301 Verify: Yes 00:07:11.301 00:07:11.301 Running for 1 seconds... 00:07:11.301 00:07:11.301 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.301 ------------------------------------------------------------------------------------ 00:07:11.301 0,0 644832/s 2518 MiB/s 0 0 00:07:11.301 ==================================================================================== 00:07:11.301 Total 644832/s 2518 MiB/s 0 0' 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.301 11:34:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:11.301 11:34:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.301 11:34:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.301 11:34:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.301 11:34:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:11.301 11:34:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.301 11:34:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.301 11:34:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.301 11:34:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.301 11:34:41 -- accel/accel.sh@42 -- # jq -r . 00:07:11.301 [2024-12-03 11:34:41.668912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
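Every run in this section passes -y, which lines up with the Verify: Yes entry in each configuration block and the all-zero Failed and Miscompares columns throughout; the compare figures check out the same way as the earlier ones (hypothetical one-liner, not harness output):

echo $(( 644832 * 4096 / 1048576 ))   # 2518 -> matches the logged compare bandwidth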
00:07:11.301 [2024-12-03 11:34:41.668986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590680 ] 00:07:11.301 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.301 [2024-12-03 11:34:41.738134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.301 [2024-12-03 11:34:41.806807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.301 11:34:41 -- accel/accel.sh@21 -- # val= 00:07:11.301 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.301 11:34:41 -- accel/accel.sh@21 -- # val= 00:07:11.301 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.301 11:34:41 -- accel/accel.sh@21 -- # val=0x1 00:07:11.301 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.301 11:34:41 -- accel/accel.sh@21 -- # val= 00:07:11.301 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.301 11:34:41 -- accel/accel.sh@21 -- # val= 00:07:11.301 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.301 11:34:41 -- accel/accel.sh@21 -- # val=compare 00:07:11.301 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.301 11:34:41 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.301 11:34:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.301 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.301 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- accel/accel.sh@21 -- # val= 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- accel/accel.sh@21 -- # val=software 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- accel/accel.sh@21 -- # val=32 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- accel/accel.sh@21 -- # val=32 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- accel/accel.sh@21 -- # val=1 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- accel/accel.sh@21 -- # val=Yes 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- accel/accel.sh@21 -- # val= 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.302 11:34:41 -- accel/accel.sh@21 -- # val= 00:07:11.302 11:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.302 11:34:41 -- accel/accel.sh@20 -- # read -r var val 00:07:12.676 11:34:42 -- accel/accel.sh@21 -- # val= 00:07:12.676 11:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.676 11:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:12.676 11:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:12.676 11:34:42 -- accel/accel.sh@21 -- # val= 00:07:12.676 11:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.676 11:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:12.676 11:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:12.676 11:34:42 -- accel/accel.sh@21 -- # val= 00:07:12.676 11:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.676 11:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:12.676 11:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:12.676 11:34:42 -- accel/accel.sh@21 -- # val= 00:07:12.676 11:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.676 11:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:12.676 11:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:12.676 11:34:43 -- accel/accel.sh@21 -- # val= 00:07:12.676 11:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.676 11:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.676 11:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.676 11:34:43 -- accel/accel.sh@21 -- # val= 00:07:12.676 11:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.676 11:34:43 -- accel/accel.sh@20 -- # IFS=: 00:07:12.676 11:34:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.676 11:34:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.676 11:34:43 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:12.676 11:34:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.676 00:07:12.676 real 0m2.720s 00:07:12.676 user 0m2.472s 00:07:12.676 sys 0m0.256s 00:07:12.676 11:34:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.676 11:34:43 -- common/autotest_common.sh@10 -- # set +x 00:07:12.676 ************************************ 00:07:12.676 END TEST accel_compare 00:07:12.676 ************************************ 00:07:12.676 11:34:43 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:12.676 11:34:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:12.676 11:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.676 11:34:43 -- common/autotest_common.sh@10 -- # set +x 00:07:12.676 ************************************ 00:07:12.676 START TEST accel_xor 00:07:12.676 ************************************ 00:07:12.676 11:34:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:12.676 11:34:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.676 11:34:43 -- accel/accel.sh@17 
-- # local accel_module 00:07:12.676 11:34:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:12.676 11:34:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:12.676 11:34:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.676 11:34:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.676 11:34:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.676 11:34:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.676 11:34:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.676 11:34:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.676 11:34:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.676 11:34:43 -- accel/accel.sh@42 -- # jq -r . 00:07:12.676 [2024-12-03 11:34:43.075776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.676 [2024-12-03 11:34:43.075857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590962 ] 00:07:12.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.676 [2024-12-03 11:34:43.146453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.676 [2024-12-03 11:34:43.213834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.050 11:34:44 -- accel/accel.sh@18 -- # out=' 00:07:14.050 SPDK Configuration: 00:07:14.050 Core mask: 0x1 00:07:14.050 00:07:14.050 Accel Perf Configuration: 00:07:14.050 Workload Type: xor 00:07:14.050 Source buffers: 2 00:07:14.050 Transfer size: 4096 bytes 00:07:14.050 Vector count 1 00:07:14.050 Module: software 00:07:14.050 Queue depth: 32 00:07:14.050 Allocate depth: 32 00:07:14.050 # threads/core: 1 00:07:14.050 Run time: 1 seconds 00:07:14.050 Verify: Yes 00:07:14.050 00:07:14.050 Running for 1 seconds... 00:07:14.050 00:07:14.050 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.050 ------------------------------------------------------------------------------------ 00:07:14.050 0,0 498016/s 1945 MiB/s 0 0 00:07:14.050 ==================================================================================== 00:07:14.050 Total 498016/s 1945 MiB/s 0 0' 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.050 11:34:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:14.050 11:34:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.050 11:34:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.050 11:34:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.050 11:34:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:14.050 11:34:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.050 11:34:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.050 11:34:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.050 11:34:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.050 11:34:44 -- accel/accel.sh@42 -- # jq -r . 00:07:14.050 [2024-12-03 11:34:44.423511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
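The xor case above runs with the default source-buffer count, which the configuration dump reports as "Source buffers: 2", and the software path moves roughly 498K 4 KiB transfers per second. A minimal hedged sketch of the same invocation, under the same assumptions as the compare example:

# Hedged sketch: software xor over the default two source buffers, as in the run above.
PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w xor -y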
00:07:14.050 [2024-12-03 11:34:44.423571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591230 ] 00:07:14.050 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.050 [2024-12-03 11:34:44.491413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.050 [2024-12-03 11:34:44.555567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.050 11:34:44 -- accel/accel.sh@21 -- # val= 00:07:14.050 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.050 11:34:44 -- accel/accel.sh@21 -- # val= 00:07:14.050 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.050 11:34:44 -- accel/accel.sh@21 -- # val=0x1 00:07:14.050 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.050 11:34:44 -- accel/accel.sh@21 -- # val= 00:07:14.050 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.050 11:34:44 -- accel/accel.sh@21 -- # val= 00:07:14.050 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.050 11:34:44 -- accel/accel.sh@21 -- # val=xor 00:07:14.050 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.050 11:34:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.050 11:34:44 -- accel/accel.sh@21 -- # val=2 00:07:14.050 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.050 11:34:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.050 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.050 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- accel/accel.sh@21 -- # val= 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- accel/accel.sh@21 -- # val=software 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- accel/accel.sh@21 -- # val=32 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- accel/accel.sh@21 -- # val=32 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- 
accel/accel.sh@21 -- # val=1 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- accel/accel.sh@21 -- # val=Yes 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- accel/accel.sh@21 -- # val= 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:14.051 11:34:44 -- accel/accel.sh@21 -- # val= 00:07:14.051 11:34:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # IFS=: 00:07:14.051 11:34:44 -- accel/accel.sh@20 -- # read -r var val 00:07:15.422 11:34:45 -- accel/accel.sh@21 -- # val= 00:07:15.422 11:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # IFS=: 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # read -r var val 00:07:15.422 11:34:45 -- accel/accel.sh@21 -- # val= 00:07:15.422 11:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # IFS=: 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # read -r var val 00:07:15.422 11:34:45 -- accel/accel.sh@21 -- # val= 00:07:15.422 11:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # IFS=: 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # read -r var val 00:07:15.422 11:34:45 -- accel/accel.sh@21 -- # val= 00:07:15.422 11:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # IFS=: 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # read -r var val 00:07:15.422 11:34:45 -- accel/accel.sh@21 -- # val= 00:07:15.422 11:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # IFS=: 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # read -r var val 00:07:15.422 11:34:45 -- accel/accel.sh@21 -- # val= 00:07:15.422 11:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # IFS=: 00:07:15.422 11:34:45 -- accel/accel.sh@20 -- # read -r var val 00:07:15.422 11:34:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.422 11:34:45 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:15.422 11:34:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.422 00:07:15.422 real 0m2.706s 00:07:15.422 user 0m2.461s 00:07:15.422 sys 0m0.251s 00:07:15.422 11:34:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.422 11:34:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.422 ************************************ 00:07:15.422 END TEST accel_xor 00:07:15.422 ************************************ 00:07:15.422 11:34:45 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:15.422 11:34:45 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:15.422 11:34:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.422 11:34:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.422 ************************************ 00:07:15.422 START TEST accel_xor 
00:07:15.422 ************************************ 00:07:15.422 11:34:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:15.422 11:34:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.422 11:34:45 -- accel/accel.sh@17 -- # local accel_module 00:07:15.422 11:34:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:15.422 11:34:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:15.422 11:34:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.422 11:34:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.422 11:34:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.422 11:34:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.422 11:34:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.422 11:34:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.422 11:34:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.422 11:34:45 -- accel/accel.sh@42 -- # jq -r . 00:07:15.422 [2024-12-03 11:34:45.823383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.422 [2024-12-03 11:34:45.823451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591516 ] 00:07:15.422 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.422 [2024-12-03 11:34:45.890897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.422 [2024-12-03 11:34:45.953615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.798 11:34:47 -- accel/accel.sh@18 -- # out=' 00:07:16.798 SPDK Configuration: 00:07:16.798 Core mask: 0x1 00:07:16.798 00:07:16.798 Accel Perf Configuration: 00:07:16.798 Workload Type: xor 00:07:16.798 Source buffers: 3 00:07:16.798 Transfer size: 4096 bytes 00:07:16.798 Vector count 1 00:07:16.798 Module: software 00:07:16.798 Queue depth: 32 00:07:16.798 Allocate depth: 32 00:07:16.798 # threads/core: 1 00:07:16.798 Run time: 1 seconds 00:07:16.798 Verify: Yes 00:07:16.798 00:07:16.798 Running for 1 seconds... 00:07:16.798 00:07:16.798 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.798 ------------------------------------------------------------------------------------ 00:07:16.798 0,0 466560/s 1822 MiB/s 0 0 00:07:16.798 ==================================================================================== 00:07:16.798 Total 466560/s 1822 MiB/s 0 0' 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.798 11:34:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:16.798 11:34:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.798 11:34:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.798 11:34:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.798 11:34:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:16.798 11:34:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.798 11:34:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.798 11:34:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.798 11:34:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.798 11:34:47 -- accel/accel.sh@42 -- # jq -r . 00:07:16.798 [2024-12-03 11:34:47.173671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
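This second xor case differs from the previous one only by "-x 3", so each operation XORs three source buffers instead of two; the extra buffer shows up in the numbers, which drop from about 498K to about 466K transfers per second. A hedged sketch of the variant, same assumptions as before:

# Hedged sketch: xor with three source buffers, mirroring the "-x 3" run above.
PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w xor -y -x 3   # -x sets the number of xor source buffers, per the logged command line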
00:07:16.798 [2024-12-03 11:34:47.173744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591789 ] 00:07:16.798 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.798 [2024-12-03 11:34:47.241926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.798 [2024-12-03 11:34:47.305650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.798 11:34:47 -- accel/accel.sh@21 -- # val= 00:07:16.798 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.798 11:34:47 -- accel/accel.sh@21 -- # val= 00:07:16.798 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.798 11:34:47 -- accel/accel.sh@21 -- # val=0x1 00:07:16.798 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.798 11:34:47 -- accel/accel.sh@21 -- # val= 00:07:16.798 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.798 11:34:47 -- accel/accel.sh@21 -- # val= 00:07:16.798 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.798 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.798 11:34:47 -- accel/accel.sh@21 -- # val=xor 00:07:16.798 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.798 11:34:47 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val=3 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val= 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val=software 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val=32 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val=32 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- 
accel/accel.sh@21 -- # val=1 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val=Yes 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val= 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:16.799 11:34:47 -- accel/accel.sh@21 -- # val= 00:07:16.799 11:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # IFS=: 00:07:16.799 11:34:47 -- accel/accel.sh@20 -- # read -r var val 00:07:18.174 11:34:48 -- accel/accel.sh@21 -- # val= 00:07:18.174 11:34:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # IFS=: 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # read -r var val 00:07:18.174 11:34:48 -- accel/accel.sh@21 -- # val= 00:07:18.174 11:34:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # IFS=: 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # read -r var val 00:07:18.174 11:34:48 -- accel/accel.sh@21 -- # val= 00:07:18.174 11:34:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # IFS=: 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # read -r var val 00:07:18.174 11:34:48 -- accel/accel.sh@21 -- # val= 00:07:18.174 11:34:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # IFS=: 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # read -r var val 00:07:18.174 11:34:48 -- accel/accel.sh@21 -- # val= 00:07:18.174 11:34:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # IFS=: 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # read -r var val 00:07:18.174 11:34:48 -- accel/accel.sh@21 -- # val= 00:07:18.174 11:34:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # IFS=: 00:07:18.174 11:34:48 -- accel/accel.sh@20 -- # read -r var val 00:07:18.174 11:34:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.174 11:34:48 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:18.174 11:34:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.174 00:07:18.174 real 0m2.707s 00:07:18.174 user 0m2.460s 00:07:18.174 sys 0m0.256s 00:07:18.174 11:34:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.174 11:34:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.174 ************************************ 00:07:18.174 END TEST accel_xor 00:07:18.174 ************************************ 00:07:18.174 11:34:48 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:18.174 11:34:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:18.174 11:34:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.174 11:34:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.174 ************************************ 00:07:18.174 START TEST 
accel_dif_verify 00:07:18.174 ************************************ 00:07:18.174 11:34:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:18.174 11:34:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.174 11:34:48 -- accel/accel.sh@17 -- # local accel_module 00:07:18.174 11:34:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:18.174 11:34:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:18.174 11:34:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.174 11:34:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.174 11:34:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.174 11:34:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.174 11:34:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.174 11:34:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.174 11:34:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.174 11:34:48 -- accel/accel.sh@42 -- # jq -r . 00:07:18.174 [2024-12-03 11:34:48.578444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.174 [2024-12-03 11:34:48.578514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592071 ] 00:07:18.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.174 [2024-12-03 11:34:48.647361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.174 [2024-12-03 11:34:48.712506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.549 11:34:49 -- accel/accel.sh@18 -- # out=' 00:07:19.549 SPDK Configuration: 00:07:19.549 Core mask: 0x1 00:07:19.549 00:07:19.549 Accel Perf Configuration: 00:07:19.549 Workload Type: dif_verify 00:07:19.549 Vector size: 4096 bytes 00:07:19.549 Transfer size: 4096 bytes 00:07:19.549 Block size: 512 bytes 00:07:19.549 Metadata size: 8 bytes 00:07:19.549 Vector count 1 00:07:19.549 Module: software 00:07:19.549 Queue depth: 32 00:07:19.549 Allocate depth: 32 00:07:19.549 # threads/core: 1 00:07:19.549 Run time: 1 seconds 00:07:19.549 Verify: No 00:07:19.549 00:07:19.549 Running for 1 seconds... 00:07:19.549 00:07:19.549 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.549 ------------------------------------------------------------------------------------ 00:07:19.549 0,0 137056/s 543 MiB/s 0 0 00:07:19.549 ==================================================================================== 00:07:19.549 Total 137056/s 535 MiB/s 0 0' 00:07:19.549 11:34:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:19.549 11:34:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.549 11:34:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.549 11:34:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.549 11:34:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:19.549 11:34:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.549 11:34:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.549 11:34:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.549 11:34:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.549 11:34:49 -- accel/accel.sh@42 -- # jq -r . 
00:07:19.549 [2024-12-03 11:34:49.932486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.549 [2024-12-03 11:34:49.932559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592337 ] 00:07:19.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.549 [2024-12-03 11:34:50.001491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.549 [2024-12-03 11:34:50.080409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val= 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val= 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val=0x1 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val= 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val= 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val=dif_verify 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val= 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val=software 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val=32 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val=32 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val=1 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val=No 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val= 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.549 11:34:50 -- accel/accel.sh@21 -- # val= 00:07:19.549 11:34:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.549 11:34:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.550 11:34:50 -- accel/accel.sh@20 -- # read -r var val 00:07:20.942 11:34:51 -- accel/accel.sh@21 -- # val= 00:07:20.942 11:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:20.943 11:34:51 -- accel/accel.sh@21 -- # val= 00:07:20.943 11:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:20.943 11:34:51 -- accel/accel.sh@21 -- # val= 00:07:20.943 11:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:20.943 11:34:51 -- accel/accel.sh@21 -- # val= 00:07:20.943 11:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:20.943 11:34:51 -- accel/accel.sh@21 -- # val= 00:07:20.943 11:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:20.943 11:34:51 -- accel/accel.sh@21 -- # val= 00:07:20.943 11:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # IFS=: 00:07:20.943 11:34:51 -- accel/accel.sh@20 -- # read -r var val 00:07:20.943 11:34:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.943 11:34:51 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:20.943 11:34:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.943 00:07:20.943 real 0m2.731s 00:07:20.943 user 0m2.487s 00:07:20.943 sys 0m0.252s 00:07:20.943 11:34:51 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.943 11:34:51 -- common/autotest_common.sh@10 -- # set +x 00:07:20.943 ************************************ 00:07:20.943 END TEST accel_dif_verify 00:07:20.943 ************************************ 00:07:20.943 11:34:51 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:20.943 11:34:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:20.943 11:34:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.943 11:34:51 -- common/autotest_common.sh@10 -- # set +x 00:07:20.943 ************************************ 00:07:20.943 START TEST accel_dif_generate 00:07:20.943 ************************************ 00:07:20.943 11:34:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:20.943 11:34:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.943 11:34:51 -- accel/accel.sh@17 -- # local accel_module 00:07:20.943 11:34:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:20.943 11:34:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:20.943 11:34:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.943 11:34:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.943 11:34:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.943 11:34:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.943 11:34:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.943 11:34:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.943 11:34:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.943 11:34:51 -- accel/accel.sh@42 -- # jq -r . 00:07:20.943 [2024-12-03 11:34:51.355400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.943 [2024-12-03 11:34:51.355471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592565 ] 00:07:20.943 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.943 [2024-12-03 11:34:51.425025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.943 [2024-12-03 11:34:51.490574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.316 11:34:52 -- accel/accel.sh@18 -- # out=' 00:07:22.316 SPDK Configuration: 00:07:22.316 Core mask: 0x1 00:07:22.316 00:07:22.316 Accel Perf Configuration: 00:07:22.316 Workload Type: dif_generate 00:07:22.316 Vector size: 4096 bytes 00:07:22.316 Transfer size: 4096 bytes 00:07:22.316 Block size: 512 bytes 00:07:22.316 Metadata size: 8 bytes 00:07:22.316 Vector count 1 00:07:22.316 Module: software 00:07:22.316 Queue depth: 32 00:07:22.316 Allocate depth: 32 00:07:22.316 # threads/core: 1 00:07:22.316 Run time: 1 seconds 00:07:22.316 Verify: No 00:07:22.316 00:07:22.316 Running for 1 seconds... 
00:07:22.316 00:07:22.316 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.316 ------------------------------------------------------------------------------------ 00:07:22.316 0,0 166944/s 662 MiB/s 0 0 00:07:22.316 ==================================================================================== 00:07:22.316 Total 166944/s 652 MiB/s 0 0' 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:22.316 11:34:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.316 11:34:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.316 11:34:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.316 11:34:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:22.316 11:34:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.316 11:34:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.316 11:34:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.316 11:34:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.316 11:34:52 -- accel/accel.sh@42 -- # jq -r . 00:07:22.316 [2024-12-03 11:34:52.710579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.316 [2024-12-03 11:34:52.710651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592750 ] 00:07:22.316 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.316 [2024-12-03 11:34:52.779184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.316 [2024-12-03 11:34:52.843705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val= 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val= 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val=0x1 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val= 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val= 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val=dif_generate 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 
00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val= 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val=software 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val=32 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val=32 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val=1 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val=No 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val= 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.316 11:34:52 -- accel/accel.sh@21 -- # val= 00:07:22.316 11:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.316 11:34:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.317 11:34:52 -- accel/accel.sh@20 -- # read -r var val 00:07:23.692 11:34:54 -- accel/accel.sh@21 -- # val= 00:07:23.692 11:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.692 11:34:54 -- accel/accel.sh@21 -- # val= 00:07:23.692 11:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.692 11:34:54 -- accel/accel.sh@21 -- # val= 00:07:23.692 11:34:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.692 11:34:54 -- accel/accel.sh@21 -- # val= 00:07:23.692 11:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.692 11:34:54 -- accel/accel.sh@21 -- # val= 00:07:23.692 11:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.692 11:34:54 -- accel/accel.sh@21 -- # val= 00:07:23.692 11:34:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.692 11:34:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.692 11:34:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.692 11:34:54 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:23.692 11:34:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.692 00:07:23.692 real 0m2.716s 00:07:23.692 user 0m2.468s 00:07:23.692 sys 0m0.257s 00:07:23.692 11:34:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.692 11:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.692 ************************************ 00:07:23.692 END TEST accel_dif_generate 00:07:23.692 ************************************ 00:07:23.692 11:34:54 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:23.692 11:34:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:23.692 11:34:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.692 11:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.692 ************************************ 00:07:23.692 START TEST accel_dif_generate_copy 00:07:23.692 ************************************ 00:07:23.692 11:34:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:23.692 11:34:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.692 11:34:54 -- accel/accel.sh@17 -- # local accel_module 00:07:23.692 11:34:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:23.692 11:34:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:23.692 11:34:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.692 11:34:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.692 11:34:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.692 11:34:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.692 11:34:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.693 11:34:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.693 11:34:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.693 11:34:54 -- accel/accel.sh@42 -- # jq -r . 00:07:23.693 [2024-12-03 11:34:54.106004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
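dif_generate only produces the protection information rather than checking it, which is consistent with it being the fastest of the three DIF cases in this log (about 167K transfers per second versus 137K for dif_verify). Hedged sketch, same assumptions as the earlier examples:

# Hedged sketch: DIF generate workload matching the test that just finished.
PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w dif_generate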
00:07:23.693 [2024-12-03 11:34:54.106077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592952 ] 00:07:23.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.693 [2024-12-03 11:34:54.174551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.693 [2024-12-03 11:34:54.240551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.069 11:34:55 -- accel/accel.sh@18 -- # out=' 00:07:25.069 SPDK Configuration: 00:07:25.069 Core mask: 0x1 00:07:25.069 00:07:25.069 Accel Perf Configuration: 00:07:25.069 Workload Type: dif_generate_copy 00:07:25.069 Vector size: 4096 bytes 00:07:25.069 Transfer size: 4096 bytes 00:07:25.069 Vector count 1 00:07:25.069 Module: software 00:07:25.069 Queue depth: 32 00:07:25.069 Allocate depth: 32 00:07:25.069 # threads/core: 1 00:07:25.069 Run time: 1 seconds 00:07:25.069 Verify: No 00:07:25.069 00:07:25.069 Running for 1 seconds... 00:07:25.069 00:07:25.069 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.069 ------------------------------------------------------------------------------------ 00:07:25.069 0,0 127360/s 505 MiB/s 0 0 00:07:25.069 ==================================================================================== 00:07:25.069 Total 127360/s 497 MiB/s 0 0' 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:25.069 11:34:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.069 11:34:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.069 11:34:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:25.069 11:34:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.069 11:34:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.069 11:34:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.069 11:34:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.069 11:34:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.069 11:34:55 -- accel/accel.sh@42 -- # jq -r . 00:07:25.069 [2024-12-03 11:34:55.459330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
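dif_generate_copy combines DIF generation with a data copy into a separate output buffer, and the table above shows it landing below plain dif_generate at roughly 127K transfers per second. Hedged sketch, same assumptions as before:

# Hedged sketch: DIF generate-and-copy workload from the run above.
PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w dif_generate_copy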
00:07:25.069 [2024-12-03 11:34:55.459399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593205 ] 00:07:25.069 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.069 [2024-12-03 11:34:55.527692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.069 [2024-12-03 11:34:55.591959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val= 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val= 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val=0x1 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val= 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val= 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val= 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val=software 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val=32 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val=32 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r 
var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val=1 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val=No 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val= 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.069 11:34:55 -- accel/accel.sh@21 -- # val= 00:07:25.069 11:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.069 11:34:55 -- accel/accel.sh@20 -- # read -r var val 00:07:26.445 11:34:56 -- accel/accel.sh@21 -- # val= 00:07:26.445 11:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.445 11:34:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.445 11:34:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.445 11:34:56 -- accel/accel.sh@21 -- # val= 00:07:26.445 11:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.445 11:34:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.445 11:34:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.445 11:34:56 -- accel/accel.sh@21 -- # val= 00:07:26.445 11:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.445 11:34:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.446 11:34:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.446 11:34:56 -- accel/accel.sh@21 -- # val= 00:07:26.446 11:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.446 11:34:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.446 11:34:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.446 11:34:56 -- accel/accel.sh@21 -- # val= 00:07:26.446 11:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.446 11:34:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.446 11:34:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.446 11:34:56 -- accel/accel.sh@21 -- # val= 00:07:26.446 11:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.446 11:34:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.446 11:34:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.446 11:34:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.446 11:34:56 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:26.446 11:34:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.446 00:07:26.446 real 0m2.710s 00:07:26.446 user 0m2.461s 00:07:26.446 sys 0m0.258s 00:07:26.446 11:34:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.446 11:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:26.446 ************************************ 00:07:26.446 END TEST accel_dif_generate_copy 00:07:26.446 ************************************ 00:07:26.446 11:34:56 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:26.446 11:34:56 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.446 11:34:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:26.446 11:34:56 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.446 11:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:26.446 ************************************ 00:07:26.446 START TEST accel_comp 00:07:26.446 ************************************ 00:07:26.446 11:34:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.446 11:34:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.446 11:34:56 -- accel/accel.sh@17 -- # local accel_module 00:07:26.446 11:34:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.446 11:34:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.446 11:34:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.446 11:34:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.446 11:34:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.446 11:34:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.446 11:34:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.446 11:34:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.446 11:34:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.446 11:34:56 -- accel/accel.sh@42 -- # jq -r . 00:07:26.446 [2024-12-03 11:34:56.853616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.446 [2024-12-03 11:34:56.853685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593488 ] 00:07:26.446 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.446 [2024-12-03 11:34:56.921911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.446 [2024-12-03 11:34:56.987299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.822 11:34:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:27.822 00:07:27.822 SPDK Configuration: 00:07:27.822 Core mask: 0x1 00:07:27.822 00:07:27.822 Accel Perf Configuration: 00:07:27.822 Workload Type: compress 00:07:27.822 Transfer size: 4096 bytes 00:07:27.822 Vector count 1 00:07:27.822 Module: software 00:07:27.822 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.822 Queue depth: 32 00:07:27.822 Allocate depth: 32 00:07:27.822 # threads/core: 1 00:07:27.822 Run time: 1 seconds 00:07:27.822 Verify: No 00:07:27.822 00:07:27.822 Running for 1 seconds... 
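The compress pass feeds a fixed input file through the software module in 4 KiB transfers; verification is off here (Verify: No) and is only enabled for the decompress variants further on. A sketch of the command line recorded in the xtrace, using the same workspace path as above:

  # compress pass with the bib test file as input (-l)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w compress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib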
00:07:27.822 00:07:27.822 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.822 ------------------------------------------------------------------------------------ 00:07:27.822 0,0 63776/s 265 MiB/s 0 0 00:07:27.822 ==================================================================================== 00:07:27.822 Total 63776/s 249 MiB/s 0 0' 00:07:27.822 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.822 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.822 11:34:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.822 11:34:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.822 11:34:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.822 11:34:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.822 11:34:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.822 11:34:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.823 11:34:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.823 11:34:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.823 11:34:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.823 11:34:58 -- accel/accel.sh@42 -- # jq -r . 00:07:27.823 [2024-12-03 11:34:58.199285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.823 [2024-12-03 11:34:58.199349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593755 ] 00:07:27.823 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.823 [2024-12-03 11:34:58.267208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.823 [2024-12-03 11:34:58.332827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val= 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val= 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val= 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val=0x1 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val= 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val= 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val=compress 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val= 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val=software 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val=32 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val=32 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val=1 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val=No 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val= 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:27.823 11:34:58 -- accel/accel.sh@21 -- # val= 00:07:27.823 11:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # IFS=: 00:07:27.823 11:34:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.200 11:34:59 -- accel/accel.sh@21 -- # val= 00:07:29.200 11:34:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.200 11:34:59 -- accel/accel.sh@21 -- # val= 00:07:29.200 11:34:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.200 11:34:59 -- accel/accel.sh@21 -- # val= 00:07:29.200 11:34:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.200 
11:34:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.200 11:34:59 -- accel/accel.sh@21 -- # val= 00:07:29.200 11:34:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.200 11:34:59 -- accel/accel.sh@21 -- # val= 00:07:29.200 11:34:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.200 11:34:59 -- accel/accel.sh@21 -- # val= 00:07:29.200 11:34:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.200 11:34:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.200 11:34:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.200 11:34:59 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:29.200 11:34:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.200 00:07:29.200 real 0m2.707s 00:07:29.200 user 0m2.467s 00:07:29.200 sys 0m0.248s 00:07:29.200 11:34:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.200 11:34:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.200 ************************************ 00:07:29.200 END TEST accel_comp 00:07:29.200 ************************************ 00:07:29.200 11:34:59 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:29.200 11:34:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:29.200 11:34:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.200 11:34:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.200 ************************************ 00:07:29.200 START TEST accel_decomp 00:07:29.200 ************************************ 00:07:29.200 11:34:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:29.200 11:34:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.200 11:34:59 -- accel/accel.sh@17 -- # local accel_module 00:07:29.200 11:34:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:29.200 11:34:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:29.200 11:34:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.200 11:34:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.200 11:34:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.200 11:34:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.200 11:34:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.200 11:34:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.200 11:34:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.200 11:34:59 -- accel/accel.sh@42 -- # jq -r . 00:07:29.200 [2024-12-03 11:34:59.598896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
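accel_decomp repeats the same setup for the decompress workload and adds -y, which turns on result verification (Verify: Yes in the report that follows). Sketch of the invocation from the xtrace:

  # decompress pass with verification enabled (-y)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y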
00:07:29.201 [2024-12-03 11:34:59.598986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594048 ] 00:07:29.201 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.201 [2024-12-03 11:34:59.668061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.201 [2024-12-03 11:34:59.732642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.576 11:35:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.576 00:07:30.576 SPDK Configuration: 00:07:30.577 Core mask: 0x1 00:07:30.577 00:07:30.577 Accel Perf Configuration: 00:07:30.577 Workload Type: decompress 00:07:30.577 Transfer size: 4096 bytes 00:07:30.577 Vector count 1 00:07:30.577 Module: software 00:07:30.577 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:30.577 Queue depth: 32 00:07:30.577 Allocate depth: 32 00:07:30.577 # threads/core: 1 00:07:30.577 Run time: 1 seconds 00:07:30.577 Verify: Yes 00:07:30.577 00:07:30.577 Running for 1 seconds... 00:07:30.577 00:07:30.577 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.577 ------------------------------------------------------------------------------------ 00:07:30.577 0,0 83040/s 153 MiB/s 0 0 00:07:30.577 ==================================================================================== 00:07:30.577 Total 83040/s 324 MiB/s 0 0' 00:07:30.577 11:35:00 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:00 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:30.577 11:35:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.577 11:35:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.577 11:35:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.577 11:35:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:30.577 11:35:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.577 11:35:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.577 11:35:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.577 11:35:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.577 11:35:00 -- accel/accel.sh@42 -- # jq -r . 00:07:30.577 [2024-12-03 11:35:00.955723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
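The walls of val= lines that follow each second accel_perf launch (here and in the earlier tests) are the harness re-reading accel_perf's own configuration report, splitting each line on ':' and keeping only the workload type and module name so it can assert afterwards that the software module really executed (the [[ -n software ]] and [[ software == \s\o\f\t\w\a\r\e ]] checks at the end of each test). A rough reconstruction of that loop, inferred from the IFS=:/read/case trace; the exact matching and whitespace handling in accel.sh may differ, and $report below is just a stand-in for the captured accel_perf output:

  # illustrative only: scrape 'Workload Type: ...' and 'Module: ...' lines
  accel_opc="" accel_module=""
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=$(echo $val) ;;    # unquoted echo trims whitespace
          *Module*)          accel_module=$(echo $val) ;;
      esac
  done <<< "$report"
  [[ -n $accel_opc && -n $accel_module && $accel_module == software ]]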
00:07:30.577 [2024-12-03 11:35:00.955795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594318 ] 00:07:30.577 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.577 [2024-12-03 11:35:01.024391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.577 [2024-12-03 11:35:01.088042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val= 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val= 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val= 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val=0x1 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val= 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val= 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val=decompress 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val= 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val=software 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val=32 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- 
accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val=32 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val=1 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val=Yes 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val= 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.577 11:35:01 -- accel/accel.sh@21 -- # val= 00:07:30.577 11:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.577 11:35:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.952 11:35:02 -- accel/accel.sh@21 -- # val= 00:07:31.952 11:35:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # IFS=: 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # read -r var val 00:07:31.952 11:35:02 -- accel/accel.sh@21 -- # val= 00:07:31.952 11:35:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # IFS=: 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # read -r var val 00:07:31.952 11:35:02 -- accel/accel.sh@21 -- # val= 00:07:31.952 11:35:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # IFS=: 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # read -r var val 00:07:31.952 11:35:02 -- accel/accel.sh@21 -- # val= 00:07:31.952 11:35:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # IFS=: 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # read -r var val 00:07:31.952 11:35:02 -- accel/accel.sh@21 -- # val= 00:07:31.952 11:35:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # IFS=: 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # read -r var val 00:07:31.952 11:35:02 -- accel/accel.sh@21 -- # val= 00:07:31.952 11:35:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # IFS=: 00:07:31.952 11:35:02 -- accel/accel.sh@20 -- # read -r var val 00:07:31.952 11:35:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.952 11:35:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.952 11:35:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.952 00:07:31.952 real 0m2.719s 00:07:31.952 user 0m2.466s 00:07:31.952 sys 0m0.262s 00:07:31.952 11:35:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.952 11:35:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.952 ************************************ 00:07:31.952 END TEST accel_decomp 00:07:31.952 ************************************ 00:07:31.952 11:35:02 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.952 11:35:02 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:31.952 11:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.952 11:35:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.953 ************************************ 00:07:31.953 START TEST accel_decmop_full 00:07:31.953 ************************************ 00:07:31.953 11:35:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.953 11:35:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.953 11:35:02 -- accel/accel.sh@17 -- # local accel_module 00:07:31.953 11:35:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.953 11:35:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.953 11:35:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.953 11:35:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.953 11:35:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.953 11:35:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.953 11:35:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.953 11:35:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.953 11:35:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.953 11:35:02 -- accel/accel.sh@42 -- # jq -r . 00:07:31.953 [2024-12-03 11:35:02.360140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:31.953 [2024-12-03 11:35:02.360209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594601 ] 00:07:31.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.953 [2024-12-03 11:35:02.429011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.953 [2024-12-03 11:35:02.493981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.331 11:35:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:33.331 00:07:33.331 SPDK Configuration: 00:07:33.331 Core mask: 0x1 00:07:33.331 00:07:33.331 Accel Perf Configuration: 00:07:33.331 Workload Type: decompress 00:07:33.331 Transfer size: 111250 bytes 00:07:33.331 Vector count 1 00:07:33.331 Module: software 00:07:33.331 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:33.331 Queue depth: 32 00:07:33.331 Allocate depth: 32 00:07:33.331 # threads/core: 1 00:07:33.331 Run time: 1 seconds 00:07:33.331 Verify: Yes 00:07:33.331 00:07:33.331 Running for 1 seconds... 
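The accel_decmop_full variant adds -o 0 on top of the decompress flags; judging by the configuration report above, that makes accel_perf size its buffers from the input data itself (111250-byte transfers) instead of the default 4096 bytes. Sketch of the invocation from the xtrace:

  # 'full' decompress pass: whole-chunk transfers instead of 4 KiB
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0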
00:07:33.331 00:07:33.331 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.331 ------------------------------------------------------------------------------------ 00:07:33.331 0,0 5760/s 237 MiB/s 0 0 00:07:33.331 ==================================================================================== 00:07:33.331 Total 5760/s 611 MiB/s 0 0' 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.331 11:35:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.331 11:35:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.331 11:35:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.331 11:35:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.331 11:35:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.331 11:35:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.331 11:35:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.331 11:35:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.331 11:35:03 -- accel/accel.sh@42 -- # jq -r . 00:07:33.331 [2024-12-03 11:35:03.724466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.331 [2024-12-03 11:35:03.724542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594850 ] 00:07:33.331 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.331 [2024-12-03 11:35:03.793116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.331 [2024-12-03 11:35:03.857016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val= 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val= 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val= 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val=0x1 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val= 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val= 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val=decompress 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 
00:07:33.331 11:35:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val= 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val=software 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val=32 00:07:33.331 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 11:35:03 -- accel/accel.sh@21 -- # val=32 00:07:33.332 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.332 11:35:03 -- accel/accel.sh@21 -- # val=1 00:07:33.332 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.332 11:35:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.332 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.332 11:35:03 -- accel/accel.sh@21 -- # val=Yes 00:07:33.332 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.332 11:35:03 -- accel/accel.sh@21 -- # val= 00:07:33.332 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:33.332 11:35:03 -- accel/accel.sh@21 -- # val= 00:07:33.332 11:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # IFS=: 00:07:33.332 11:35:03 -- accel/accel.sh@20 -- # read -r var val 00:07:34.757 11:35:05 -- accel/accel.sh@21 -- # val= 00:07:34.757 11:35:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.757 11:35:05 -- accel/accel.sh@21 -- # val= 00:07:34.757 11:35:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.757 11:35:05 -- accel/accel.sh@21 -- # val= 00:07:34.757 11:35:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.757 11:35:05 -- 
accel/accel.sh@20 -- # IFS=: 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.757 11:35:05 -- accel/accel.sh@21 -- # val= 00:07:34.757 11:35:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.757 11:35:05 -- accel/accel.sh@21 -- # val= 00:07:34.757 11:35:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.757 11:35:05 -- accel/accel.sh@21 -- # val= 00:07:34.757 11:35:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.757 11:35:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.757 11:35:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.757 11:35:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.757 11:35:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.757 00:07:34.757 real 0m2.735s 00:07:34.757 user 0m2.495s 00:07:34.757 sys 0m0.248s 00:07:34.757 11:35:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.757 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:34.757 ************************************ 00:07:34.757 END TEST accel_decmop_full 00:07:34.757 ************************************ 00:07:34.757 11:35:05 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.757 11:35:05 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:34.757 11:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.757 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:34.757 ************************************ 00:07:34.757 START TEST accel_decomp_mcore 00:07:34.757 ************************************ 00:07:34.757 11:35:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.757 11:35:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.757 11:35:05 -- accel/accel.sh@17 -- # local accel_module 00:07:34.757 11:35:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.757 11:35:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.757 11:35:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.757 11:35:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.757 11:35:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.757 11:35:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.757 11:35:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.757 11:35:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.757 11:35:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.757 11:35:05 -- accel/accel.sh@42 -- # jq -r . 00:07:34.757 [2024-12-03 11:35:05.139013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:34.757 [2024-12-03 11:35:05.139080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595111 ] 00:07:34.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.757 [2024-12-03 11:35:05.210456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.757 [2024-12-03 11:35:05.279819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.757 [2024-12-03 11:35:05.279915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.757 [2024-12-03 11:35:05.279989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.757 [2024-12-03 11:35:05.279991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.129 11:35:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:36.129 00:07:36.129 SPDK Configuration: 00:07:36.129 Core mask: 0xf 00:07:36.129 00:07:36.129 Accel Perf Configuration: 00:07:36.129 Workload Type: decompress 00:07:36.129 Transfer size: 4096 bytes 00:07:36.129 Vector count 1 00:07:36.129 Module: software 00:07:36.129 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:36.129 Queue depth: 32 00:07:36.129 Allocate depth: 32 00:07:36.129 # threads/core: 1 00:07:36.129 Run time: 1 seconds 00:07:36.129 Verify: Yes 00:07:36.129 00:07:36.129 Running for 1 seconds... 00:07:36.129 00:07:36.129 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.129 ------------------------------------------------------------------------------------ 00:07:36.129 0,0 69504/s 128 MiB/s 0 0 00:07:36.130 3,0 73440/s 135 MiB/s 0 0 00:07:36.130 2,0 73376/s 135 MiB/s 0 0 00:07:36.130 1,0 73536/s 135 MiB/s 0 0 00:07:36.130 ==================================================================================== 00:07:36.130 Total 289856/s 1132 MiB/s 0 0' 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.130 11:35:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.130 11:35:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.130 11:35:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.130 11:35:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.130 11:35:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.130 11:35:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.130 11:35:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.130 11:35:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.130 11:35:06 -- accel/accel.sh@42 -- # jq -r . 00:07:36.130 [2024-12-03 11:35:06.510418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
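accel_decomp_mcore reruns decompress with -m 0xf, so four reactors come up (cores 0 through 3) and the report above gains one row per core; the aggregate of 289856 transfers/s (about 1.1 GiB/s) is roughly 3.5 times the single-core decompress run earlier in the log. Sketch of the invocation from the xtrace:

  # multi-core decompress pass: core mask 0xf = cores 0-3
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf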
00:07:36.130 [2024-12-03 11:35:06.510506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595302 ] 00:07:36.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.130 [2024-12-03 11:35:06.581876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.130 [2024-12-03 11:35:06.649546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.130 [2024-12-03 11:35:06.649640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.130 [2024-12-03 11:35:06.649722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.130 [2024-12-03 11:35:06.649725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val= 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val= 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val= 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val=0xf 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val= 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val= 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val=decompress 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val= 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val=software 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val=32 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val=32 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val=1 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val=Yes 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val= 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.130 11:35:06 -- accel/accel.sh@21 -- # val= 00:07:36.130 11:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.130 11:35:06 -- accel/accel.sh@20 -- # read -r var val 00:07:37.504 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.504 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.504 11:35:07 -- accel/accel.sh@20 -- # IFS=: 00:07:37.504 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.504 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.504 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.504 11:35:07 -- accel/accel.sh@20 -- # IFS=: 00:07:37.504 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.504 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.504 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.504 11:35:07 -- accel/accel.sh@20 -- # IFS=: 00:07:37.504 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.505 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.505 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # IFS=: 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.505 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.505 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # IFS=: 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.505 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.505 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # IFS=: 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.505 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.505 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # IFS=: 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.505 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.505 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.505 11:35:07 
-- accel/accel.sh@20 -- # IFS=: 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.505 11:35:07 -- accel/accel.sh@21 -- # val= 00:07:37.505 11:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # IFS=: 00:07:37.505 11:35:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.505 11:35:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.505 11:35:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.505 11:35:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.505 00:07:37.505 real 0m2.751s 00:07:37.505 user 0m9.152s 00:07:37.505 sys 0m0.272s 00:07:37.505 11:35:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.505 11:35:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.505 ************************************ 00:07:37.505 END TEST accel_decomp_mcore 00:07:37.505 ************************************ 00:07:37.505 11:35:07 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.505 11:35:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:37.505 11:35:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.505 11:35:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.505 ************************************ 00:07:37.505 START TEST accel_decomp_full_mcore 00:07:37.505 ************************************ 00:07:37.505 11:35:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.505 11:35:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.505 11:35:07 -- accel/accel.sh@17 -- # local accel_module 00:07:37.505 11:35:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.505 11:35:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.505 11:35:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.505 11:35:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.505 11:35:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.505 11:35:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.505 11:35:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.505 11:35:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.505 11:35:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.505 11:35:07 -- accel/accel.sh@42 -- # jq -r . 00:07:37.505 [2024-12-03 11:35:07.935988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:37.505 [2024-12-03 11:35:07.936054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595525 ] 00:07:37.505 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.505 [2024-12-03 11:35:08.007616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.505 [2024-12-03 11:35:08.075780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.505 [2024-12-03 11:35:08.075884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.505 [2024-12-03 11:35:08.075947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.505 [2024-12-03 11:35:08.075949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.881 11:35:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:38.881 00:07:38.881 SPDK Configuration: 00:07:38.881 Core mask: 0xf 00:07:38.881 00:07:38.881 Accel Perf Configuration: 00:07:38.881 Workload Type: decompress 00:07:38.881 Transfer size: 111250 bytes 00:07:38.881 Vector count 1 00:07:38.881 Module: software 00:07:38.881 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:38.881 Queue depth: 32 00:07:38.881 Allocate depth: 32 00:07:38.881 # threads/core: 1 00:07:38.881 Run time: 1 seconds 00:07:38.881 Verify: Yes 00:07:38.881 00:07:38.881 Running for 1 seconds... 00:07:38.881 00:07:38.881 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.881 ------------------------------------------------------------------------------------ 00:07:38.881 0,0 5376/s 222 MiB/s 0 0 00:07:38.881 3,0 5696/s 235 MiB/s 0 0 00:07:38.881 2,0 5696/s 235 MiB/s 0 0 00:07:38.881 1,0 5696/s 235 MiB/s 0 0 00:07:38.881 ==================================================================================== 00:07:38.881 Total 22464/s 2383 MiB/s 0 0' 00:07:38.881 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.881 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.881 11:35:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.881 11:35:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.881 11:35:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:38.881 11:35:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.881 11:35:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.881 11:35:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.881 11:35:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.881 11:35:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.881 11:35:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.881 11:35:09 -- accel/accel.sh@42 -- # jq -r . 00:07:38.881 [2024-12-03 11:35:09.316003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
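The last test in this stretch combines the two variants: whole-chunk transfers (-o 0, 111250 bytes) across all four cores (-m 0xf). The report above shows 22464 transfers/s and about 2.3 GiB/s aggregate, close to four times the single-core full-buffer run. Sketch of the invocation from the xtrace:

  # multi-core, whole-chunk decompress pass
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf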
00:07:38.881 [2024-12-03 11:35:09.316076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595749 ] 00:07:38.881 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.881 [2024-12-03 11:35:09.386115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.881 [2024-12-03 11:35:09.453882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.881 [2024-12-03 11:35:09.453978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.881 [2024-12-03 11:35:09.454042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.881 [2024-12-03 11:35:09.454043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val= 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val= 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val= 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val=0xf 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val= 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val= 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val=decompress 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val= 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val=software 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val=32 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val=32 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val=1 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val=Yes 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val= 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.140 11:35:09 -- accel/accel.sh@21 -- # val= 00:07:39.140 11:35:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.140 11:35:09 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 
-- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@21 -- # val= 00:07:40.077 11:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.077 11:35:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.077 11:35:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.077 11:35:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.077 11:35:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.077 00:07:40.077 real 0m2.768s 00:07:40.077 user 0m9.216s 00:07:40.077 sys 0m0.269s 00:07:40.077 11:35:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.077 11:35:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.077 ************************************ 00:07:40.077 END TEST accel_decomp_full_mcore 00:07:40.077 ************************************ 00:07:40.336 11:35:10 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:40.336 11:35:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:40.336 11:35:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.336 11:35:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.336 ************************************ 00:07:40.336 START TEST accel_decomp_mthread 00:07:40.336 ************************************ 00:07:40.336 11:35:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:40.336 11:35:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.336 11:35:10 -- accel/accel.sh@17 -- # local accel_module 00:07:40.336 11:35:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:40.336 11:35:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:40.336 11:35:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.336 11:35:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.336 11:35:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.336 11:35:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.336 11:35:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.336 11:35:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.336 11:35:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.336 11:35:10 -- accel/accel.sh@42 -- # jq -r . 00:07:40.336 [2024-12-03 11:35:10.748022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.336 [2024-12-03 11:35:10.748089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596040 ] 00:07:40.336 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.336 [2024-12-03 11:35:10.819950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.336 [2024-12-03 11:35:10.886369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.711 11:35:12 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:41.711 00:07:41.711 SPDK Configuration: 00:07:41.711 Core mask: 0x1 00:07:41.711 00:07:41.711 Accel Perf Configuration: 00:07:41.711 Workload Type: decompress 00:07:41.711 Transfer size: 4096 bytes 00:07:41.711 Vector count 1 00:07:41.711 Module: software 00:07:41.711 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.711 Queue depth: 32 00:07:41.711 Allocate depth: 32 00:07:41.711 # threads/core: 2 00:07:41.711 Run time: 1 seconds 00:07:41.711 Verify: Yes 00:07:41.711 00:07:41.711 Running for 1 seconds... 00:07:41.711 00:07:41.711 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.711 ------------------------------------------------------------------------------------ 00:07:41.711 0,1 44000/s 81 MiB/s 0 0 00:07:41.711 0,0 43840/s 80 MiB/s 0 0 00:07:41.711 ==================================================================================== 00:07:41.711 Total 87840/s 343 MiB/s 0 0' 00:07:41.711 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.711 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.711 11:35:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.711 11:35:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.711 11:35:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.711 11:35:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.712 11:35:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.712 11:35:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.712 11:35:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.712 11:35:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.712 11:35:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.712 11:35:12 -- accel/accel.sh@42 -- # jq -r . 00:07:41.712 [2024-12-03 11:35:12.112906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
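For reference, the run traced above is driven through the accel_perf example binary; a minimal standalone sketch of the same multi-threaded software decompress run, assuming an in-place build of this workspace (the harness additionally feeds a generated JSON accel config on fd 62 via -c /dev/fd/62, omitted here):
  # 1-second software decompress, verify enabled (-y), 2 threads per core (-T 2),
  # reading the pre-generated compressed input file used by the suite
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -y -T 2 \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib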
00:07:41.712 [2024-12-03 11:35:12.112994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596313 ] 00:07:41.712 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.712 [2024-12-03 11:35:12.182514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.712 [2024-12-03 11:35:12.247201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val= 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val= 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val= 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val=0x1 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val= 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val= 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val=decompress 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val= 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val=software 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val=32 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- 
accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val=32 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val=2 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val=Yes 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val= 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.712 11:35:12 -- accel/accel.sh@21 -- # val= 00:07:41.712 11:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.712 11:35:12 -- accel/accel.sh@20 -- # read -r var val 00:07:43.087 11:35:13 -- accel/accel.sh@21 -- # val= 00:07:43.087 11:35:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.087 11:35:13 -- accel/accel.sh@21 -- # val= 00:07:43.087 11:35:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.087 11:35:13 -- accel/accel.sh@21 -- # val= 00:07:43.087 11:35:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.087 11:35:13 -- accel/accel.sh@21 -- # val= 00:07:43.087 11:35:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.087 11:35:13 -- accel/accel.sh@21 -- # val= 00:07:43.087 11:35:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.087 11:35:13 -- accel/accel.sh@21 -- # val= 00:07:43.087 11:35:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.087 11:35:13 -- accel/accel.sh@21 -- # val= 00:07:43.087 11:35:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.087 11:35:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.087 11:35:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.087 11:35:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.087 11:35:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.087 00:07:43.087 real 0m2.732s 00:07:43.088 user 0m2.491s 00:07:43.088 sys 0m0.251s 00:07:43.088 11:35:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.088 11:35:13 -- common/autotest_common.sh@10 -- # set +x 
00:07:43.088 ************************************ 00:07:43.088 END TEST accel_decomp_mthread 00:07:43.088 ************************************ 00:07:43.088 11:35:13 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.088 11:35:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:43.088 11:35:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.088 11:35:13 -- common/autotest_common.sh@10 -- # set +x 00:07:43.088 ************************************ 00:07:43.088 START TEST accel_deomp_full_mthread 00:07:43.088 ************************************ 00:07:43.088 11:35:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.088 11:35:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.088 11:35:13 -- accel/accel.sh@17 -- # local accel_module 00:07:43.088 11:35:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.088 11:35:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.088 11:35:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.088 11:35:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.088 11:35:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.088 11:35:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.088 11:35:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.088 11:35:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.088 11:35:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.088 11:35:13 -- accel/accel.sh@42 -- # jq -r . 00:07:43.088 [2024-12-03 11:35:13.518660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.088 [2024-12-03 11:35:13.518747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596594 ] 00:07:43.088 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.088 [2024-12-03 11:35:13.588701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.088 [2024-12-03 11:35:13.653397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.464 11:35:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:44.464 00:07:44.464 SPDK Configuration: 00:07:44.464 Core mask: 0x1 00:07:44.464 00:07:44.464 Accel Perf Configuration: 00:07:44.464 Workload Type: decompress 00:07:44.464 Transfer size: 111250 bytes 00:07:44.464 Vector count 1 00:07:44.464 Module: software 00:07:44.464 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:44.464 Queue depth: 32 00:07:44.464 Allocate depth: 32 00:07:44.464 # threads/core: 2 00:07:44.464 Run time: 1 seconds 00:07:44.464 Verify: Yes 00:07:44.464 00:07:44.464 Running for 1 seconds... 
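The accel_deomp_full_mthread variant starting above repeats the same decompress workload with -o 0 added; comparing its configuration dump with the earlier one, the transfer size grows from the 4096-byte default to the full 111250-byte chunk of the input file, so -o 0 appears to request full-buffer transfers (an inference from the two dumps, not a documented flag description). A sketch of the invocation, again omitting the -c /dev/fd/62 config the harness supplies:
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -y -o 0 -T 2 \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib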
00:07:44.464 00:07:44.464 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.464 ------------------------------------------------------------------------------------ 00:07:44.464 0,1 2880/s 118 MiB/s 0 0 00:07:44.464 0,0 2816/s 116 MiB/s 0 0 00:07:44.464 ==================================================================================== 00:07:44.464 Total 5696/s 604 MiB/s 0 0' 00:07:44.464 11:35:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.464 11:35:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.464 11:35:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.464 11:35:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.464 11:35:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.464 11:35:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.464 11:35:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.464 11:35:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.464 11:35:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.464 11:35:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.464 11:35:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.464 11:35:14 -- accel/accel.sh@42 -- # jq -r . 00:07:44.464 [2024-12-03 11:35:14.899184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.464 [2024-12-03 11:35:14.899256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596868 ] 00:07:44.464 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.464 [2024-12-03 11:35:14.967172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.464 [2024-12-03 11:35:15.030809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.464 11:35:15 -- accel/accel.sh@21 -- # val= 00:07:44.464 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.464 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.464 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.464 11:35:15 -- accel/accel.sh@21 -- # val= 00:07:44.464 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.464 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.464 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.464 11:35:15 -- accel/accel.sh@21 -- # val= 00:07:44.464 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.464 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.464 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.464 11:35:15 -- accel/accel.sh@21 -- # val=0x1 00:07:44.464 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.464 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.464 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val= 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val= 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val=decompress 00:07:44.721 11:35:15 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val= 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val=software 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val=32 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val=32 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val=2 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.721 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.721 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.721 11:35:15 -- accel/accel.sh@21 -- # val=Yes 00:07:44.722 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.722 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.722 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.722 11:35:15 -- accel/accel.sh@21 -- # val= 00:07:44.722 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.722 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.722 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.722 11:35:15 -- accel/accel.sh@21 -- # val= 00:07:44.722 11:35:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.722 11:35:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.722 11:35:15 -- accel/accel.sh@20 -- # read -r var val 00:07:45.652 11:35:16 -- accel/accel.sh@21 -- # val= 00:07:45.652 11:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.652 11:35:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.652 11:35:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.652 11:35:16 -- accel/accel.sh@21 -- # val= 00:07:45.652 11:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.652 11:35:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.652 11:35:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.652 11:35:16 -- accel/accel.sh@21 -- # val= 00:07:45.652 11:35:16 -- accel/accel.sh@22 -- # case "$var" in 
00:07:45.652 11:35:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.652 11:35:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.652 11:35:16 -- accel/accel.sh@21 -- # val= 00:07:45.652 11:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.652 11:35:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.652 11:35:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.652 11:35:16 -- accel/accel.sh@21 -- # val= 00:07:45.652 11:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.653 11:35:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.653 11:35:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.653 11:35:16 -- accel/accel.sh@21 -- # val= 00:07:45.653 11:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.653 11:35:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.653 11:35:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.653 11:35:16 -- accel/accel.sh@21 -- # val= 00:07:45.653 11:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.653 11:35:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.653 11:35:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.653 11:35:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.653 11:35:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:45.653 11:35:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.653 00:07:45.653 real 0m2.763s 00:07:45.653 user 0m2.514s 00:07:45.653 sys 0m0.257s 00:07:45.653 11:35:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.653 11:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.653 ************************************ 00:07:45.653 END TEST accel_deomp_full_mthread 00:07:45.653 ************************************ 00:07:45.910 11:35:16 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:45.910 11:35:16 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:45.910 11:35:16 -- accel/accel.sh@129 -- # build_accel_config 00:07:45.910 11:35:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:45.910 11:35:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.910 11:35:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.910 11:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.910 11:35:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.910 11:35:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.910 11:35:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.910 11:35:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.910 11:35:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.910 11:35:16 -- accel/accel.sh@42 -- # jq -r . 00:07:45.910 ************************************ 00:07:45.910 START TEST accel_dif_functional_tests 00:07:45.910 ************************************ 00:07:45.910 11:35:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:45.910 [2024-12-03 11:35:16.342227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
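The accel_dif_functional_tests run starting above launches a CUnit suite that feeds deliberately mismatched T10 DIF fields through the accel framework and expects the Guard, Application Tag, and Reference Tag comparison failures (plus a misaligned bounce-iovec generate-copy) to be reported, as the *ERROR* lines below show. A minimal sketch of the standalone launch, with accel_config.json standing in for the JSON config the harness generates and pipes in on fd 62:
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  # accel_config.json is a placeholder name, not a file shipped with the repo
  ./test/accel/dif/dif -c /dev/fd/62 62< accel_config.json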
00:07:45.910 [2024-12-03 11:35:16.342279] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597150 ] 00:07:45.910 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.910 [2024-12-03 11:35:16.409455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.910 [2024-12-03 11:35:16.474024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.910 [2024-12-03 11:35:16.474125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.910 [2024-12-03 11:35:16.474128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.168 00:07:46.168 00:07:46.168 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.168 http://cunit.sourceforge.net/ 00:07:46.168 00:07:46.168 00:07:46.168 Suite: accel_dif 00:07:46.168 Test: verify: DIF generated, GUARD check ...passed 00:07:46.168 Test: verify: DIF generated, APPTAG check ...passed 00:07:46.168 Test: verify: DIF generated, REFTAG check ...passed 00:07:46.168 Test: verify: DIF not generated, GUARD check ...[2024-12-03 11:35:16.543180] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:46.168 [2024-12-03 11:35:16.543225] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:46.168 passed 00:07:46.168 Test: verify: DIF not generated, APPTAG check ...[2024-12-03 11:35:16.543257] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:46.168 [2024-12-03 11:35:16.543274] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:46.168 passed 00:07:46.168 Test: verify: DIF not generated, REFTAG check ...[2024-12-03 11:35:16.543295] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:46.168 [2024-12-03 11:35:16.543312] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:46.168 passed 00:07:46.168 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:46.169 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-03 11:35:16.543356] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:46.169 passed 00:07:46.169 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:46.169 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:46.169 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:46.169 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-03 11:35:16.543462] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:46.169 passed 00:07:46.169 Test: generate copy: DIF generated, GUARD check ...passed 00:07:46.169 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:46.169 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:46.169 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:46.169 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:46.169 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:46.169 Test: generate copy: iovecs-len validate ...[2024-12-03 11:35:16.543639] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:46.169 passed 00:07:46.169 Test: generate copy: buffer alignment validate ...passed 00:07:46.169 00:07:46.169 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.169 suites 1 1 n/a 0 0 00:07:46.169 tests 20 20 20 0 0 00:07:46.169 asserts 204 204 204 0 n/a 00:07:46.169 00:07:46.169 Elapsed time = 0.002 seconds 00:07:46.169 00:07:46.169 real 0m0.431s 00:07:46.169 user 0m0.637s 00:07:46.169 sys 0m0.153s 00:07:46.169 11:35:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.169 11:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:46.169 ************************************ 00:07:46.169 END TEST accel_dif_functional_tests 00:07:46.169 ************************************ 00:07:46.169 00:07:46.169 real 0m58.236s 00:07:46.169 user 1m6.074s 00:07:46.169 sys 0m6.863s 00:07:46.169 11:35:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.169 11:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:46.169 ************************************ 00:07:46.169 END TEST accel 00:07:46.169 ************************************ 00:07:46.427 11:35:16 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:46.427 11:35:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.427 11:35:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.427 11:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 ************************************ 00:07:46.427 START TEST accel_rpc 00:07:46.427 ************************************ 00:07:46.427 11:35:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:46.427 * Looking for test storage... 00:07:46.427 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:46.427 11:35:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:46.427 11:35:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:46.427 11:35:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:46.427 11:35:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:46.427 11:35:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:46.427 11:35:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:46.427 11:35:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:46.427 11:35:16 -- scripts/common.sh@335 -- # IFS=.-: 00:07:46.427 11:35:16 -- scripts/common.sh@335 -- # read -ra ver1 00:07:46.427 11:35:16 -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.427 11:35:16 -- scripts/common.sh@336 -- # read -ra ver2 00:07:46.427 11:35:16 -- scripts/common.sh@337 -- # local 'op=<' 00:07:46.427 11:35:16 -- scripts/common.sh@339 -- # ver1_l=2 00:07:46.427 11:35:16 -- scripts/common.sh@340 -- # ver2_l=1 00:07:46.427 11:35:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:46.427 11:35:16 -- scripts/common.sh@343 -- # case "$op" in 00:07:46.427 11:35:16 -- scripts/common.sh@344 -- # : 1 00:07:46.427 11:35:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:46.427 11:35:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.427 11:35:16 -- scripts/common.sh@364 -- # decimal 1 00:07:46.427 11:35:16 -- scripts/common.sh@352 -- # local d=1 00:07:46.427 11:35:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.427 11:35:16 -- scripts/common.sh@354 -- # echo 1 00:07:46.427 11:35:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:46.427 11:35:16 -- scripts/common.sh@365 -- # decimal 2 00:07:46.427 11:35:16 -- scripts/common.sh@352 -- # local d=2 00:07:46.427 11:35:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.427 11:35:16 -- scripts/common.sh@354 -- # echo 2 00:07:46.427 11:35:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:46.427 11:35:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:46.427 11:35:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:46.427 11:35:16 -- scripts/common.sh@367 -- # return 0 00:07:46.427 11:35:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.427 11:35:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:46.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.427 --rc genhtml_branch_coverage=1 00:07:46.427 --rc genhtml_function_coverage=1 00:07:46.427 --rc genhtml_legend=1 00:07:46.427 --rc geninfo_all_blocks=1 00:07:46.427 --rc geninfo_unexecuted_blocks=1 00:07:46.427 00:07:46.427 ' 00:07:46.427 11:35:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:46.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.427 --rc genhtml_branch_coverage=1 00:07:46.427 --rc genhtml_function_coverage=1 00:07:46.427 --rc genhtml_legend=1 00:07:46.427 --rc geninfo_all_blocks=1 00:07:46.427 --rc geninfo_unexecuted_blocks=1 00:07:46.427 00:07:46.427 ' 00:07:46.427 11:35:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:46.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.427 --rc genhtml_branch_coverage=1 00:07:46.427 --rc genhtml_function_coverage=1 00:07:46.427 --rc genhtml_legend=1 00:07:46.427 --rc geninfo_all_blocks=1 00:07:46.427 --rc geninfo_unexecuted_blocks=1 00:07:46.427 00:07:46.427 ' 00:07:46.427 11:35:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:46.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.427 --rc genhtml_branch_coverage=1 00:07:46.427 --rc genhtml_function_coverage=1 00:07:46.427 --rc genhtml_legend=1 00:07:46.427 --rc geninfo_all_blocks=1 00:07:46.427 --rc geninfo_unexecuted_blocks=1 00:07:46.428 00:07:46.428 ' 00:07:46.428 11:35:16 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:46.428 11:35:16 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3597232 00:07:46.428 11:35:16 -- accel/accel_rpc.sh@15 -- # waitforlisten 3597232 00:07:46.428 11:35:16 -- common/autotest_common.sh@829 -- # '[' -z 3597232 ']' 00:07:46.428 11:35:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.428 11:35:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.428 11:35:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
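The accel_rpc trace that follows starts a bare spdk_tgt with --wait-for-rpc, assigns the copy opcode to the software module before framework initialization, and then reads the assignment back. A minimal sketch of the same sequence using the project's rpc.py, assuming the default /var/tmp/spdk.sock socket:
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --wait-for-rpc &
  # wait for the RPC socket to appear, then assign the opcode before framework_start_init
  $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software
  $SPDK/scripts/rpc.py framework_start_init
  $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software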
00:07:46.428 11:35:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.428 11:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:46.428 11:35:16 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:46.428 [2024-12-03 11:35:17.015053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.428 [2024-12-03 11:35:17.015116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597232 ] 00:07:46.685 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.685 [2024-12-03 11:35:17.082906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.685 [2024-12-03 11:35:17.150189] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.685 [2024-12-03 11:35:17.150307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.247 11:35:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.247 11:35:17 -- common/autotest_common.sh@862 -- # return 0 00:07:47.247 11:35:17 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:47.247 11:35:17 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:47.247 11:35:17 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:47.247 11:35:17 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:47.247 11:35:17 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:47.247 11:35:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.247 11:35:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.247 11:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:47.247 ************************************ 00:07:47.248 START TEST accel_assign_opcode 00:07:47.248 ************************************ 00:07:47.248 11:35:17 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:47.248 11:35:17 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:47.248 11:35:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.248 11:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:47.248 [2024-12-03 11:35:17.820310] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:47.248 11:35:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.248 11:35:17 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:47.248 11:35:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.248 11:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:47.248 [2024-12-03 11:35:17.828316] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:47.248 11:35:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.248 11:35:17 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:47.248 11:35:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.248 11:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:47.506 11:35:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.506 11:35:18 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:47.506 11:35:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.506 11:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:47.506 11:35:18 -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:07:47.506 11:35:18 -- accel/accel_rpc.sh@42 -- # grep software 00:07:47.506 11:35:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.506 software 00:07:47.506 00:07:47.506 real 0m0.232s 00:07:47.506 user 0m0.034s 00:07:47.506 sys 0m0.010s 00:07:47.506 11:35:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.506 11:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:47.506 ************************************ 00:07:47.506 END TEST accel_assign_opcode 00:07:47.506 ************************************ 00:07:47.506 11:35:18 -- accel/accel_rpc.sh@55 -- # killprocess 3597232 00:07:47.506 11:35:18 -- common/autotest_common.sh@936 -- # '[' -z 3597232 ']' 00:07:47.506 11:35:18 -- common/autotest_common.sh@940 -- # kill -0 3597232 00:07:47.506 11:35:18 -- common/autotest_common.sh@941 -- # uname 00:07:47.506 11:35:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:47.506 11:35:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3597232 00:07:47.764 11:35:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:47.764 11:35:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:47.764 11:35:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3597232' 00:07:47.764 killing process with pid 3597232 00:07:47.764 11:35:18 -- common/autotest_common.sh@955 -- # kill 3597232 00:07:47.764 11:35:18 -- common/autotest_common.sh@960 -- # wait 3597232 00:07:48.023 00:07:48.023 real 0m1.656s 00:07:48.023 user 0m1.669s 00:07:48.023 sys 0m0.467s 00:07:48.023 11:35:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.023 11:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:48.023 ************************************ 00:07:48.023 END TEST accel_rpc 00:07:48.023 ************************************ 00:07:48.023 11:35:18 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:48.023 11:35:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.023 11:35:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.023 11:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:48.023 ************************************ 00:07:48.023 START TEST app_cmdline 00:07:48.023 ************************************ 00:07:48.023 11:35:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:48.023 * Looking for test storage... 
00:07:48.023 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:48.023 11:35:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:48.023 11:35:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:48.023 11:35:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:48.282 11:35:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:48.282 11:35:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:48.282 11:35:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:48.282 11:35:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:48.282 11:35:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:48.282 11:35:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:48.282 11:35:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.282 11:35:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:48.282 11:35:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:48.282 11:35:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:48.282 11:35:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:48.282 11:35:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:48.282 11:35:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:48.282 11:35:18 -- scripts/common.sh@344 -- # : 1 00:07:48.282 11:35:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:48.282 11:35:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.282 11:35:18 -- scripts/common.sh@364 -- # decimal 1 00:07:48.282 11:35:18 -- scripts/common.sh@352 -- # local d=1 00:07:48.282 11:35:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.282 11:35:18 -- scripts/common.sh@354 -- # echo 1 00:07:48.282 11:35:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:48.282 11:35:18 -- scripts/common.sh@365 -- # decimal 2 00:07:48.282 11:35:18 -- scripts/common.sh@352 -- # local d=2 00:07:48.282 11:35:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.282 11:35:18 -- scripts/common.sh@354 -- # echo 2 00:07:48.282 11:35:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:48.282 11:35:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:48.282 11:35:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:48.282 11:35:18 -- scripts/common.sh@367 -- # return 0 00:07:48.282 11:35:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.282 11:35:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:48.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.282 --rc genhtml_branch_coverage=1 00:07:48.282 --rc genhtml_function_coverage=1 00:07:48.282 --rc genhtml_legend=1 00:07:48.282 --rc geninfo_all_blocks=1 00:07:48.282 --rc geninfo_unexecuted_blocks=1 00:07:48.282 00:07:48.282 ' 00:07:48.282 11:35:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:48.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.282 --rc genhtml_branch_coverage=1 00:07:48.282 --rc genhtml_function_coverage=1 00:07:48.282 --rc genhtml_legend=1 00:07:48.282 --rc geninfo_all_blocks=1 00:07:48.282 --rc geninfo_unexecuted_blocks=1 00:07:48.282 00:07:48.282 ' 00:07:48.282 11:35:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:48.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.282 --rc genhtml_branch_coverage=1 00:07:48.282 --rc genhtml_function_coverage=1 00:07:48.282 --rc genhtml_legend=1 00:07:48.282 --rc geninfo_all_blocks=1 00:07:48.282 --rc geninfo_unexecuted_blocks=1 00:07:48.282 00:07:48.282 ' 
00:07:48.282 11:35:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:48.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.282 --rc genhtml_branch_coverage=1 00:07:48.282 --rc genhtml_function_coverage=1 00:07:48.282 --rc genhtml_legend=1 00:07:48.282 --rc geninfo_all_blocks=1 00:07:48.282 --rc geninfo_unexecuted_blocks=1 00:07:48.282 00:07:48.282 ' 00:07:48.282 11:35:18 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:48.282 11:35:18 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3597622 00:07:48.282 11:35:18 -- app/cmdline.sh@18 -- # waitforlisten 3597622 00:07:48.282 11:35:18 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:48.282 11:35:18 -- common/autotest_common.sh@829 -- # '[' -z 3597622 ']' 00:07:48.282 11:35:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.282 11:35:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.282 11:35:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.282 11:35:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.282 11:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:48.282 [2024-12-03 11:35:18.759226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.282 [2024-12-03 11:35:18.759280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597622 ] 00:07:48.283 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.283 [2024-12-03 11:35:18.828214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.283 [2024-12-03 11:35:18.894765] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.283 [2024-12-03 11:35:18.894894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.217 11:35:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.217 11:35:19 -- common/autotest_common.sh@862 -- # return 0 00:07:49.217 11:35:19 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:49.217 { 00:07:49.217 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:49.217 "fields": { 00:07:49.217 "major": 24, 00:07:49.217 "minor": 1, 00:07:49.217 "patch": 1, 00:07:49.217 "suffix": "-pre", 00:07:49.217 "commit": "c13c99a5e" 00:07:49.217 } 00:07:49.217 } 00:07:49.217 11:35:19 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:49.217 11:35:19 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:49.217 11:35:19 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:49.217 11:35:19 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:49.217 11:35:19 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:49.217 11:35:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.217 11:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:49.217 11:35:19 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:49.217 11:35:19 -- app/cmdline.sh@26 -- # sort 00:07:49.217 11:35:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.217 11:35:19 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:49.217 11:35:19 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:49.217 11:35:19 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.217 11:35:19 -- common/autotest_common.sh@650 -- # local es=0 00:07:49.217 11:35:19 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.217 11:35:19 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:49.217 11:35:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.217 11:35:19 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:49.217 11:35:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.217 11:35:19 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:49.217 11:35:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.217 11:35:19 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:49.217 11:35:19 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:49.217 11:35:19 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.476 request: 00:07:49.476 { 00:07:49.476 "method": "env_dpdk_get_mem_stats", 00:07:49.476 "req_id": 1 00:07:49.476 } 00:07:49.476 Got JSON-RPC error response 00:07:49.476 response: 00:07:49.476 { 00:07:49.476 "code": -32601, 00:07:49.476 "message": "Method not found" 00:07:49.476 } 00:07:49.476 11:35:19 -- common/autotest_common.sh@653 -- # es=1 00:07:49.476 11:35:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.476 11:35:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.476 11:35:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.476 11:35:19 -- app/cmdline.sh@1 -- # killprocess 3597622 00:07:49.476 11:35:19 -- common/autotest_common.sh@936 -- # '[' -z 3597622 ']' 00:07:49.476 11:35:19 -- common/autotest_common.sh@940 -- # kill -0 3597622 00:07:49.476 11:35:19 -- common/autotest_common.sh@941 -- # uname 00:07:49.476 11:35:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.476 11:35:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3597622 00:07:49.476 11:35:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:49.476 11:35:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:49.476 11:35:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3597622' 00:07:49.476 killing process with pid 3597622 00:07:49.476 11:35:20 -- common/autotest_common.sh@955 -- # kill 3597622 00:07:49.476 11:35:20 -- common/autotest_common.sh@960 -- # wait 3597622 00:07:50.045 00:07:50.045 real 0m1.834s 00:07:50.045 user 0m2.113s 00:07:50.045 sys 0m0.508s 00:07:50.045 11:35:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.045 11:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.045 ************************************ 00:07:50.045 END TEST app_cmdline 00:07:50.045 ************************************ 00:07:50.045 11:35:20 -- spdk/autotest.sh@179 -- # run_test version 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:50.045 11:35:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.045 11:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.045 11:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.045 ************************************ 00:07:50.045 START TEST version 00:07:50.045 ************************************ 00:07:50.045 11:35:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:50.045 * Looking for test storage... 00:07:50.045 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:50.045 11:35:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:50.045 11:35:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:50.045 11:35:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:50.045 11:35:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:50.045 11:35:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:50.045 11:35:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:50.045 11:35:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:50.045 11:35:20 -- scripts/common.sh@335 -- # IFS=.-: 00:07:50.045 11:35:20 -- scripts/common.sh@335 -- # read -ra ver1 00:07:50.045 11:35:20 -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.045 11:35:20 -- scripts/common.sh@336 -- # read -ra ver2 00:07:50.045 11:35:20 -- scripts/common.sh@337 -- # local 'op=<' 00:07:50.045 11:35:20 -- scripts/common.sh@339 -- # ver1_l=2 00:07:50.045 11:35:20 -- scripts/common.sh@340 -- # ver2_l=1 00:07:50.045 11:35:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:50.045 11:35:20 -- scripts/common.sh@343 -- # case "$op" in 00:07:50.045 11:35:20 -- scripts/common.sh@344 -- # : 1 00:07:50.045 11:35:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:50.045 11:35:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.045 11:35:20 -- scripts/common.sh@364 -- # decimal 1 00:07:50.045 11:35:20 -- scripts/common.sh@352 -- # local d=1 00:07:50.045 11:35:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.045 11:35:20 -- scripts/common.sh@354 -- # echo 1 00:07:50.045 11:35:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:50.045 11:35:20 -- scripts/common.sh@365 -- # decimal 2 00:07:50.045 11:35:20 -- scripts/common.sh@352 -- # local d=2 00:07:50.045 11:35:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.045 11:35:20 -- scripts/common.sh@354 -- # echo 2 00:07:50.045 11:35:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:50.045 11:35:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:50.045 11:35:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:50.045 11:35:20 -- scripts/common.sh@367 -- # return 0 00:07:50.045 11:35:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.045 11:35:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:50.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.045 --rc genhtml_branch_coverage=1 00:07:50.045 --rc genhtml_function_coverage=1 00:07:50.045 --rc genhtml_legend=1 00:07:50.045 --rc geninfo_all_blocks=1 00:07:50.045 --rc geninfo_unexecuted_blocks=1 00:07:50.045 00:07:50.045 ' 00:07:50.045 11:35:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:50.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.045 --rc genhtml_branch_coverage=1 00:07:50.045 --rc genhtml_function_coverage=1 00:07:50.045 --rc genhtml_legend=1 00:07:50.045 --rc geninfo_all_blocks=1 00:07:50.045 --rc geninfo_unexecuted_blocks=1 00:07:50.045 00:07:50.045 ' 00:07:50.045 11:35:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:50.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.045 --rc genhtml_branch_coverage=1 00:07:50.045 --rc genhtml_function_coverage=1 00:07:50.045 --rc genhtml_legend=1 00:07:50.045 --rc geninfo_all_blocks=1 00:07:50.045 --rc geninfo_unexecuted_blocks=1 00:07:50.045 00:07:50.045 ' 00:07:50.045 11:35:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:50.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.045 --rc genhtml_branch_coverage=1 00:07:50.045 --rc genhtml_function_coverage=1 00:07:50.045 --rc genhtml_legend=1 00:07:50.045 --rc geninfo_all_blocks=1 00:07:50.045 --rc geninfo_unexecuted_blocks=1 00:07:50.045 00:07:50.045 ' 00:07:50.045 11:35:20 -- app/version.sh@17 -- # get_header_version major 00:07:50.045 11:35:20 -- app/version.sh@14 -- # tr -d '"' 00:07:50.045 11:35:20 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:50.045 11:35:20 -- app/version.sh@14 -- # cut -f2 00:07:50.045 11:35:20 -- app/version.sh@17 -- # major=24 00:07:50.045 11:35:20 -- app/version.sh@18 -- # get_header_version minor 00:07:50.045 11:35:20 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:50.045 11:35:20 -- app/version.sh@14 -- # cut -f2 00:07:50.045 11:35:20 -- app/version.sh@14 -- # tr -d '"' 00:07:50.045 11:35:20 -- app/version.sh@18 -- # minor=1 00:07:50.045 11:35:20 -- app/version.sh@19 -- # get_header_version patch 00:07:50.045 11:35:20 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:50.045 11:35:20 -- app/version.sh@14 -- # cut -f2 00:07:50.045 11:35:20 -- app/version.sh@14 -- # tr -d '"' 00:07:50.045 11:35:20 -- app/version.sh@19 -- # patch=1 00:07:50.045 11:35:20 -- app/version.sh@20 -- # get_header_version suffix 00:07:50.045 11:35:20 -- app/version.sh@14 -- # cut -f2 00:07:50.045 11:35:20 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:50.045 11:35:20 -- app/version.sh@14 -- # tr -d '"' 00:07:50.045 11:35:20 -- app/version.sh@20 -- # suffix=-pre 00:07:50.045 11:35:20 -- app/version.sh@22 -- # version=24.1 00:07:50.046 11:35:20 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:50.046 11:35:20 -- app/version.sh@25 -- # version=24.1.1 00:07:50.046 11:35:20 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:50.046 11:35:20 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:50.046 11:35:20 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:50.304 11:35:20 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:50.304 11:35:20 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:50.304 00:07:50.304 real 0m0.263s 00:07:50.304 user 0m0.147s 00:07:50.304 sys 0m0.165s 00:07:50.304 11:35:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.304 11:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.304 ************************************ 00:07:50.304 END TEST version 00:07:50.304 ************************************ 00:07:50.304 11:35:20 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:50.304 11:35:20 -- spdk/autotest.sh@191 -- # uname -s 00:07:50.305 11:35:20 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:50.305 11:35:20 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:50.305 11:35:20 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:50.305 11:35:20 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:50.305 11:35:20 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:50.305 11:35:20 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:50.305 11:35:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:50.305 11:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 11:35:20 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:50.305 11:35:20 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:50.305 11:35:20 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:50.305 11:35:20 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:50.305 11:35:20 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:07:50.305 11:35:20 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:50.305 11:35:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:50.305 11:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.305 11:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 ************************************ 00:07:50.305 START TEST nvmf_rdma 00:07:50.305 ************************************ 00:07:50.305 11:35:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:50.305 * Looking 
for test storage... 00:07:50.305 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:50.305 11:35:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:50.305 11:35:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:50.305 11:35:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:50.564 11:35:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:50.564 11:35:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:50.564 11:35:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:50.564 11:35:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:50.564 11:35:20 -- scripts/common.sh@335 -- # IFS=.-: 00:07:50.564 11:35:20 -- scripts/common.sh@335 -- # read -ra ver1 00:07:50.564 11:35:20 -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.564 11:35:20 -- scripts/common.sh@336 -- # read -ra ver2 00:07:50.564 11:35:20 -- scripts/common.sh@337 -- # local 'op=<' 00:07:50.564 11:35:20 -- scripts/common.sh@339 -- # ver1_l=2 00:07:50.564 11:35:20 -- scripts/common.sh@340 -- # ver2_l=1 00:07:50.564 11:35:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:50.564 11:35:20 -- scripts/common.sh@343 -- # case "$op" in 00:07:50.564 11:35:20 -- scripts/common.sh@344 -- # : 1 00:07:50.564 11:35:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:50.564 11:35:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.564 11:35:20 -- scripts/common.sh@364 -- # decimal 1 00:07:50.564 11:35:20 -- scripts/common.sh@352 -- # local d=1 00:07:50.564 11:35:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.564 11:35:20 -- scripts/common.sh@354 -- # echo 1 00:07:50.564 11:35:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:50.564 11:35:20 -- scripts/common.sh@365 -- # decimal 2 00:07:50.564 11:35:20 -- scripts/common.sh@352 -- # local d=2 00:07:50.564 11:35:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.564 11:35:20 -- scripts/common.sh@354 -- # echo 2 00:07:50.564 11:35:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:50.564 11:35:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:50.564 11:35:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:50.564 11:35:20 -- scripts/common.sh@367 -- # return 0 00:07:50.565 11:35:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.565 11:35:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.565 --rc genhtml_branch_coverage=1 00:07:50.565 --rc genhtml_function_coverage=1 00:07:50.565 --rc genhtml_legend=1 00:07:50.565 --rc geninfo_all_blocks=1 00:07:50.565 --rc geninfo_unexecuted_blocks=1 00:07:50.565 00:07:50.565 ' 00:07:50.565 11:35:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.565 --rc genhtml_branch_coverage=1 00:07:50.565 --rc genhtml_function_coverage=1 00:07:50.565 --rc genhtml_legend=1 00:07:50.565 --rc geninfo_all_blocks=1 00:07:50.565 --rc geninfo_unexecuted_blocks=1 00:07:50.565 00:07:50.565 ' 00:07:50.565 11:35:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.565 --rc genhtml_branch_coverage=1 00:07:50.565 --rc genhtml_function_coverage=1 00:07:50.565 --rc genhtml_legend=1 00:07:50.565 --rc geninfo_all_blocks=1 00:07:50.565 --rc geninfo_unexecuted_blocks=1 00:07:50.565 
00:07:50.565 ' 00:07:50.565 11:35:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.565 --rc genhtml_branch_coverage=1 00:07:50.565 --rc genhtml_function_coverage=1 00:07:50.565 --rc genhtml_legend=1 00:07:50.565 --rc geninfo_all_blocks=1 00:07:50.565 --rc geninfo_unexecuted_blocks=1 00:07:50.565 00:07:50.565 ' 00:07:50.565 11:35:20 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:50.565 11:35:20 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:50.565 11:35:20 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.565 11:35:20 -- nvmf/common.sh@7 -- # uname -s 00:07:50.565 11:35:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.565 11:35:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.565 11:35:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.565 11:35:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.565 11:35:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.565 11:35:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.565 11:35:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.565 11:35:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.565 11:35:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.565 11:35:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.565 11:35:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:50.565 11:35:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:50.565 11:35:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.565 11:35:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.565 11:35:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.565 11:35:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:50.565 11:35:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.565 11:35:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.565 11:35:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.565 11:35:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.565 11:35:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.565 11:35:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.565 11:35:20 -- paths/export.sh@5 -- # export PATH 00:07:50.565 11:35:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.565 11:35:20 -- nvmf/common.sh@46 -- # : 0 00:07:50.565 11:35:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:50.565 11:35:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:50.565 11:35:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:50.565 11:35:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.565 11:35:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.565 11:35:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:50.565 11:35:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:50.565 11:35:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:50.565 11:35:20 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:50.565 11:35:20 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:50.565 11:35:20 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:50.565 11:35:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.565 11:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.565 11:35:20 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:50.565 11:35:20 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:50.565 11:35:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:50.565 11:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.565 11:35:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.565 ************************************ 00:07:50.565 START TEST nvmf_example 00:07:50.565 ************************************ 00:07:50.565 11:35:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:50.565 * Looking for test storage... 
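Note on the nvmf/common.sh trace above: it pins the fabric defaults reused by every test in this run (listener port 4420 with 4421/4422 as alternates, the 192.168.100.0/24 RDMA subnet) and derives the initiator identity from nvme-cli. A minimal standalone sketch of that derivation, using the UUID captured in this log (variable names follow common.sh; the parameter expansion shown is one way to strip the prefix and is illustrative, not necessarily the script's exact line):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # -> 8013ee90-59d8-e711-906e-00163566263e  (illustrative way to keep the UUID part)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # later initiator-side calls (seen in this log as NVME_CONNECT='nvme connect -i 15' for RDMA) pass "${NVME_HOST[@]}" to identify this host
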
00:07:50.565 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.565 11:35:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:50.565 11:35:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:50.565 11:35:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:50.565 11:35:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:50.565 11:35:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:50.565 11:35:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:50.565 11:35:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:50.565 11:35:21 -- scripts/common.sh@335 -- # IFS=.-: 00:07:50.565 11:35:21 -- scripts/common.sh@335 -- # read -ra ver1 00:07:50.565 11:35:21 -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.565 11:35:21 -- scripts/common.sh@336 -- # read -ra ver2 00:07:50.565 11:35:21 -- scripts/common.sh@337 -- # local 'op=<' 00:07:50.565 11:35:21 -- scripts/common.sh@339 -- # ver1_l=2 00:07:50.565 11:35:21 -- scripts/common.sh@340 -- # ver2_l=1 00:07:50.565 11:35:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:50.565 11:35:21 -- scripts/common.sh@343 -- # case "$op" in 00:07:50.565 11:35:21 -- scripts/common.sh@344 -- # : 1 00:07:50.565 11:35:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:50.565 11:35:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.825 11:35:21 -- scripts/common.sh@364 -- # decimal 1 00:07:50.825 11:35:21 -- scripts/common.sh@352 -- # local d=1 00:07:50.825 11:35:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.825 11:35:21 -- scripts/common.sh@354 -- # echo 1 00:07:50.825 11:35:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:50.825 11:35:21 -- scripts/common.sh@365 -- # decimal 2 00:07:50.825 11:35:21 -- scripts/common.sh@352 -- # local d=2 00:07:50.825 11:35:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.825 11:35:21 -- scripts/common.sh@354 -- # echo 2 00:07:50.825 11:35:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:50.825 11:35:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:50.825 11:35:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:50.825 11:35:21 -- scripts/common.sh@367 -- # return 0 00:07:50.825 11:35:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.825 11:35:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:50.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.825 --rc genhtml_branch_coverage=1 00:07:50.825 --rc genhtml_function_coverage=1 00:07:50.825 --rc genhtml_legend=1 00:07:50.825 --rc geninfo_all_blocks=1 00:07:50.825 --rc geninfo_unexecuted_blocks=1 00:07:50.825 00:07:50.825 ' 00:07:50.825 11:35:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:50.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.825 --rc genhtml_branch_coverage=1 00:07:50.825 --rc genhtml_function_coverage=1 00:07:50.825 --rc genhtml_legend=1 00:07:50.825 --rc geninfo_all_blocks=1 00:07:50.825 --rc geninfo_unexecuted_blocks=1 00:07:50.825 00:07:50.825 ' 00:07:50.825 11:35:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:50.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.825 --rc genhtml_branch_coverage=1 00:07:50.825 --rc genhtml_function_coverage=1 00:07:50.825 --rc genhtml_legend=1 00:07:50.825 --rc geninfo_all_blocks=1 00:07:50.825 --rc geninfo_unexecuted_blocks=1 00:07:50.825 00:07:50.825 ' 
00:07:50.825 11:35:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:50.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.825 --rc genhtml_branch_coverage=1 00:07:50.825 --rc genhtml_function_coverage=1 00:07:50.825 --rc genhtml_legend=1 00:07:50.825 --rc geninfo_all_blocks=1 00:07:50.825 --rc geninfo_unexecuted_blocks=1 00:07:50.825 00:07:50.825 ' 00:07:50.825 11:35:21 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.825 11:35:21 -- nvmf/common.sh@7 -- # uname -s 00:07:50.825 11:35:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.825 11:35:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.825 11:35:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.825 11:35:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.825 11:35:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.825 11:35:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.825 11:35:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.825 11:35:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.825 11:35:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.825 11:35:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.825 11:35:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:50.825 11:35:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:50.825 11:35:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.825 11:35:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.825 11:35:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.825 11:35:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:50.825 11:35:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.825 11:35:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.825 11:35:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.825 11:35:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.826 11:35:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.826 11:35:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.826 11:35:21 -- paths/export.sh@5 -- # export PATH 00:07:50.826 11:35:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.826 11:35:21 -- nvmf/common.sh@46 -- # : 0 00:07:50.826 11:35:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:50.826 11:35:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:50.826 11:35:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:50.826 11:35:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.826 11:35:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.826 11:35:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:50.826 11:35:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:50.826 11:35:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:50.826 11:35:21 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:50.826 11:35:21 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:50.826 11:35:21 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:50.826 11:35:21 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:50.826 11:35:21 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:50.826 11:35:21 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:50.826 11:35:21 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:50.826 11:35:21 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:50.826 11:35:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.826 11:35:21 -- common/autotest_common.sh@10 -- # set +x 00:07:50.826 11:35:21 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:50.826 11:35:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:50.826 11:35:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.826 11:35:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:50.826 11:35:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:50.826 11:35:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:50.826 11:35:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.826 11:35:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.826 11:35:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.826 11:35:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:50.826 11:35:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:50.826 11:35:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:50.826 11:35:21 -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.940 11:35:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:58.940 11:35:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:58.940 11:35:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:58.940 11:35:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:58.940 11:35:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:58.940 11:35:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:58.940 11:35:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:58.940 11:35:28 -- nvmf/common.sh@294 -- # net_devs=() 00:07:58.940 11:35:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:58.940 11:35:28 -- nvmf/common.sh@295 -- # e810=() 00:07:58.940 11:35:28 -- nvmf/common.sh@295 -- # local -ga e810 00:07:58.940 11:35:28 -- nvmf/common.sh@296 -- # x722=() 00:07:58.940 11:35:28 -- nvmf/common.sh@296 -- # local -ga x722 00:07:58.940 11:35:28 -- nvmf/common.sh@297 -- # mlx=() 00:07:58.940 11:35:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:58.940 11:35:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.940 11:35:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.941 11:35:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.941 11:35:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:58.941 11:35:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:58.941 11:35:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:58.941 11:35:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:07:58.941 11:35:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:58.941 11:35:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:58.941 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:58.941 11:35:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.941 11:35:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:58.941 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:58.941 11:35:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.941 11:35:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:58.941 11:35:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.941 11:35:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:58.941 11:35:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.941 11:35:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:58.941 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.941 11:35:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.941 11:35:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:58.941 11:35:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.941 11:35:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:58.941 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.941 11:35:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:58.941 11:35:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:58.941 11:35:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:58.941 11:35:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:58.941 11:35:28 -- nvmf/common.sh@57 -- # uname 00:07:58.941 11:35:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:58.941 11:35:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:58.941 11:35:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:58.941 11:35:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:58.941 11:35:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:58.941 11:35:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:58.941 11:35:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:58.941 11:35:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:58.941 11:35:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:58.941 11:35:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:58.941 11:35:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:58.941 11:35:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.941 11:35:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:58.941 11:35:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:58.941 11:35:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.941 11:35:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:58.941 11:35:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@104 -- # continue 2 00:07:58.941 11:35:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@104 -- # continue 2 00:07:58.941 11:35:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:58.941 11:35:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:58.941 11:35:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:58.941 11:35:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:07:58.941 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.941 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:58.941 altname enp217s0f0np0 00:07:58.941 altname ens818f0np0 00:07:58.941 inet 192.168.100.8/24 scope global mlx_0_0 00:07:58.941 valid_lft forever preferred_lft forever 00:07:58.941 11:35:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:58.941 11:35:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:58.941 11:35:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:58.941 11:35:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:07:58.941 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.941 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:58.941 altname enp217s0f1np1 00:07:58.941 altname ens818f1np1 00:07:58.941 inet 192.168.100.9/24 scope global mlx_0_1 00:07:58.941 valid_lft forever preferred_lft forever 00:07:58.941 11:35:28 -- nvmf/common.sh@410 -- # return 0 00:07:58.941 11:35:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:58.941 11:35:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:58.941 11:35:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:58.941 11:35:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:58.941 11:35:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.941 11:35:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:58.941 11:35:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:58.941 11:35:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.941 11:35:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:58.941 11:35:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.941 11:35:28 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@104 -- # continue 2 00:07:58.941 11:35:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.941 11:35:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.941 11:35:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@104 -- # continue 2 00:07:58.941 11:35:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:58.941 11:35:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:58.941 11:35:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:58.941 11:35:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:58.941 11:35:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:58.941 11:35:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:58.941 192.168.100.9' 00:07:58.941 11:35:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:58.941 192.168.100.9' 00:07:58.941 11:35:28 -- nvmf/common.sh@445 -- # head -n 1 00:07:58.941 11:35:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:58.941 11:35:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:58.941 192.168.100.9' 00:07:58.941 11:35:28 -- nvmf/common.sh@446 -- # tail -n +2 00:07:58.941 11:35:28 -- nvmf/common.sh@446 -- # head -n 1 00:07:58.941 11:35:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:58.941 11:35:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:58.941 11:35:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:58.941 11:35:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:58.941 11:35:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:58.941 11:35:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:58.941 11:35:28 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:58.942 11:35:28 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:58.942 11:35:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.942 11:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:58.942 11:35:28 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:58.942 11:35:28 -- target/nvmf_example.sh@34 -- # nvmfpid=3601682 00:07:58.942 11:35:28 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:58.942 11:35:28 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.942 11:35:28 -- target/nvmf_example.sh@36 -- # waitforlisten 3601682 00:07:58.942 11:35:28 -- common/autotest_common.sh@829 -- # '[' -z 3601682 ']' 00:07:58.942 11:35:28 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.942 11:35:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.942 11:35:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.942 11:35:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.942 11:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:58.942 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.942 11:35:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.942 11:35:29 -- common/autotest_common.sh@862 -- # return 0 00:07:58.942 11:35:29 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:58.942 11:35:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.942 11:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.942 11:35:29 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:58.942 11:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.942 11:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.942 11:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.942 11:35:29 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:58.942 11:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.942 11:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.942 11:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.942 11:35:29 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:58.942 11:35:29 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:58.942 11:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.942 11:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.942 11:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.942 11:35:29 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:58.942 11:35:29 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:58.942 11:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.942 11:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.942 11:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.942 11:35:29 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:58.942 11:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.942 11:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.942 11:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.942 11:35:29 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:58.942 11:35:29 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:59.200 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.462 Initializing NVMe Controllers 00:08:11.462 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.462 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
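The example target in the trace above is assembled entirely over JSON-RPC: one RDMA transport, a 64 MB Malloc bdev, a subsystem carrying that bdev as namespace 1, and an RDMA listener on 192.168.100.8:4420, after which spdk_nvme_perf drives the workload. A standalone sketch replaying the same sequence with scripts/rpc.py against an already-running nvmf target (every value is the one captured in this log; only the $RPC shorthand and the comments are added):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192               # transport flags exactly as traced above
  $RPC bdev_malloc_create 64 512                                                     # 64 MB bdev with 512 B blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001     # -a: allow any host, -s: serial number
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The latency table below is consistent with this setup: 27650.50 IOPS at 4096 B per I/O works out to roughly 27650.50 * 4096 / 2^20 ≈ 108 MiB/s, the figure shown in the MiB/s column.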
00:08:11.462 Initialization complete. Launching workers. 00:08:11.462 ======================================================== 00:08:11.462 Latency(us) 00:08:11.462 Device Information : IOPS MiB/s Average min max 00:08:11.462 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 27650.50 108.01 2314.46 594.56 13014.36 00:08:11.462 ======================================================== 00:08:11.462 Total : 27650.50 108.01 2314.46 594.56 13014.36 00:08:11.462 00:08:11.462 11:35:40 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:11.462 11:35:40 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:11.462 11:35:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:11.462 11:35:40 -- nvmf/common.sh@116 -- # sync 00:08:11.462 11:35:40 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:11.462 11:35:40 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:11.462 11:35:40 -- nvmf/common.sh@119 -- # set +e 00:08:11.462 11:35:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:11.462 11:35:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:11.462 rmmod nvme_rdma 00:08:11.462 rmmod nvme_fabrics 00:08:11.462 11:35:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:11.462 11:35:40 -- nvmf/common.sh@123 -- # set -e 00:08:11.462 11:35:40 -- nvmf/common.sh@124 -- # return 0 00:08:11.462 11:35:40 -- nvmf/common.sh@477 -- # '[' -n 3601682 ']' 00:08:11.462 11:35:40 -- nvmf/common.sh@478 -- # killprocess 3601682 00:08:11.462 11:35:40 -- common/autotest_common.sh@936 -- # '[' -z 3601682 ']' 00:08:11.462 11:35:40 -- common/autotest_common.sh@940 -- # kill -0 3601682 00:08:11.462 11:35:40 -- common/autotest_common.sh@941 -- # uname 00:08:11.462 11:35:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.462 11:35:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3601682 00:08:11.462 11:35:40 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:11.462 11:35:40 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:11.462 11:35:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3601682' 00:08:11.462 killing process with pid 3601682 00:08:11.462 11:35:40 -- common/autotest_common.sh@955 -- # kill 3601682 00:08:11.462 11:35:40 -- common/autotest_common.sh@960 -- # wait 3601682 00:08:11.462 nvmf threads initialize successfully 00:08:11.463 bdev subsystem init successfully 00:08:11.463 created a nvmf target service 00:08:11.463 create targets's poll groups done 00:08:11.463 all subsystems of target started 00:08:11.463 nvmf target is running 00:08:11.463 all subsystems of target stopped 00:08:11.463 destroy targets's poll groups done 00:08:11.463 destroyed the nvmf target service 00:08:11.463 bdev subsystem finish successfully 00:08:11.463 nvmf threads destroy successfully 00:08:11.463 11:35:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:11.463 11:35:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:11.463 11:35:41 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:11.463 11:35:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.463 11:35:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.463 00:08:11.463 real 0m20.171s 00:08:11.463 user 0m52.421s 00:08:11.463 sys 0m5.954s 00:08:11.463 11:35:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.463 11:35:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.463 ************************************ 00:08:11.463 END TEST nvmf_example 00:08:11.463 
************************************ 00:08:11.463 11:35:41 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:11.463 11:35:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:11.463 11:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.463 11:35:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.463 ************************************ 00:08:11.463 START TEST nvmf_filesystem 00:08:11.463 ************************************ 00:08:11.463 11:35:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:11.463 * Looking for test storage... 00:08:11.463 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.463 11:35:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.463 11:35:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.463 11:35:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.463 11:35:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.463 11:35:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.463 11:35:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.463 11:35:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.463 11:35:41 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.463 11:35:41 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.463 11:35:41 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.463 11:35:41 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.463 11:35:41 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.463 11:35:41 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.463 11:35:41 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.463 11:35:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.463 11:35:41 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.463 11:35:41 -- scripts/common.sh@344 -- # : 1 00:08:11.463 11:35:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.463 11:35:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.463 11:35:41 -- scripts/common.sh@364 -- # decimal 1 00:08:11.463 11:35:41 -- scripts/common.sh@352 -- # local d=1 00:08:11.463 11:35:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.463 11:35:41 -- scripts/common.sh@354 -- # echo 1 00:08:11.463 11:35:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.463 11:35:41 -- scripts/common.sh@365 -- # decimal 2 00:08:11.463 11:35:41 -- scripts/common.sh@352 -- # local d=2 00:08:11.463 11:35:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.463 11:35:41 -- scripts/common.sh@354 -- # echo 2 00:08:11.463 11:35:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.463 11:35:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.463 11:35:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.463 11:35:41 -- scripts/common.sh@367 -- # return 0 00:08:11.463 11:35:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.463 11:35:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.463 --rc genhtml_branch_coverage=1 00:08:11.463 --rc genhtml_function_coverage=1 00:08:11.463 --rc genhtml_legend=1 00:08:11.463 --rc geninfo_all_blocks=1 00:08:11.463 --rc geninfo_unexecuted_blocks=1 00:08:11.463 00:08:11.463 ' 00:08:11.463 11:35:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.463 --rc genhtml_branch_coverage=1 00:08:11.463 --rc genhtml_function_coverage=1 00:08:11.463 --rc genhtml_legend=1 00:08:11.463 --rc geninfo_all_blocks=1 00:08:11.463 --rc geninfo_unexecuted_blocks=1 00:08:11.463 00:08:11.463 ' 00:08:11.463 11:35:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.463 --rc genhtml_branch_coverage=1 00:08:11.463 --rc genhtml_function_coverage=1 00:08:11.463 --rc genhtml_legend=1 00:08:11.463 --rc geninfo_all_blocks=1 00:08:11.463 --rc geninfo_unexecuted_blocks=1 00:08:11.463 00:08:11.463 ' 00:08:11.463 11:35:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.463 --rc genhtml_branch_coverage=1 00:08:11.463 --rc genhtml_function_coverage=1 00:08:11.463 --rc genhtml_legend=1 00:08:11.463 --rc geninfo_all_blocks=1 00:08:11.463 --rc geninfo_unexecuted_blocks=1 00:08:11.463 00:08:11.463 ' 00:08:11.463 11:35:41 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:11.463 11:35:41 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:11.463 11:35:41 -- common/autotest_common.sh@34 -- # set -e 00:08:11.463 11:35:41 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:11.463 11:35:41 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:11.463 11:35:41 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:11.463 11:35:41 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:11.463 11:35:41 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:11.463 11:35:41 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:11.463 11:35:41 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:11.463 11:35:41 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
00:08:11.463 11:35:41 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:11.463 11:35:41 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:11.463 11:35:41 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:11.463 11:35:41 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:11.463 11:35:41 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:11.463 11:35:41 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:11.463 11:35:41 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:11.463 11:35:41 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:11.463 11:35:41 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:11.463 11:35:41 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:11.463 11:35:41 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:11.463 11:35:41 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:11.463 11:35:41 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:11.463 11:35:41 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:11.463 11:35:41 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:11.463 11:35:41 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:11.463 11:35:41 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:11.463 11:35:41 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:11.463 11:35:41 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:11.463 11:35:41 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:11.463 11:35:41 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:11.463 11:35:41 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:11.463 11:35:41 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:11.463 11:35:41 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:11.463 11:35:41 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:11.463 11:35:41 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:11.463 11:35:41 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:11.463 11:35:41 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:11.463 11:35:41 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:11.463 11:35:41 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:11.463 11:35:41 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:11.463 11:35:41 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:11.464 11:35:41 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:11.464 11:35:41 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:11.464 11:35:41 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:11.464 11:35:41 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:11.464 11:35:41 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:11.464 11:35:41 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:11.464 11:35:41 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:11.464 11:35:41 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:11.464 11:35:41 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:11.464 11:35:41 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:11.464 11:35:41 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:11.464 11:35:41 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:11.464 11:35:41 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:11.464 11:35:41 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 
00:08:11.464 11:35:41 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:11.464 11:35:41 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:11.464 11:35:41 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:11.464 11:35:41 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:11.464 11:35:41 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:11.464 11:35:41 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:11.464 11:35:41 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:11.464 11:35:41 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:11.464 11:35:41 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:11.464 11:35:41 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:11.464 11:35:41 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:11.464 11:35:41 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:11.464 11:35:41 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:11.464 11:35:41 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:11.464 11:35:41 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:11.464 11:35:41 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:11.464 11:35:41 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:11.464 11:35:41 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:11.464 11:35:41 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:11.464 11:35:41 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:11.464 11:35:41 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:11.464 11:35:41 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:11.464 11:35:41 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:11.464 11:35:41 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:11.464 11:35:41 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:11.464 11:35:41 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:11.464 11:35:41 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:11.464 11:35:41 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:11.464 11:35:41 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:11.464 11:35:41 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:11.464 11:35:41 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:11.464 11:35:41 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:11.464 11:35:41 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:11.464 11:35:41 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:11.464 11:35:41 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:11.464 11:35:41 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:11.464 11:35:41 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:11.464 11:35:41 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:11.464 11:35:41 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:11.464 11:35:41 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:11.464 11:35:41 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:11.464 
11:35:41 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:11.464 11:35:41 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:11.464 11:35:41 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:11.464 11:35:41 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:11.464 #define SPDK_CONFIG_H 00:08:11.464 #define SPDK_CONFIG_APPS 1 00:08:11.464 #define SPDK_CONFIG_ARCH native 00:08:11.464 #undef SPDK_CONFIG_ASAN 00:08:11.464 #undef SPDK_CONFIG_AVAHI 00:08:11.464 #undef SPDK_CONFIG_CET 00:08:11.464 #define SPDK_CONFIG_COVERAGE 1 00:08:11.464 #define SPDK_CONFIG_CROSS_PREFIX 00:08:11.464 #undef SPDK_CONFIG_CRYPTO 00:08:11.464 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:11.464 #undef SPDK_CONFIG_CUSTOMOCF 00:08:11.464 #undef SPDK_CONFIG_DAOS 00:08:11.464 #define SPDK_CONFIG_DAOS_DIR 00:08:11.464 #define SPDK_CONFIG_DEBUG 1 00:08:11.464 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:11.464 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:11.464 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:11.464 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:11.464 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:11.464 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:11.464 #define SPDK_CONFIG_EXAMPLES 1 00:08:11.464 #undef SPDK_CONFIG_FC 00:08:11.464 #define SPDK_CONFIG_FC_PATH 00:08:11.464 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:11.464 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:11.464 #undef SPDK_CONFIG_FUSE 00:08:11.464 #undef SPDK_CONFIG_FUZZER 00:08:11.464 #define SPDK_CONFIG_FUZZER_LIB 00:08:11.464 #undef SPDK_CONFIG_GOLANG 00:08:11.464 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:11.464 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:11.464 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:11.464 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:11.464 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:11.464 #define SPDK_CONFIG_IDXD 1 00:08:11.464 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:11.464 #undef SPDK_CONFIG_IPSEC_MB 00:08:11.464 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:11.464 #define SPDK_CONFIG_ISAL 1 00:08:11.464 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:11.464 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:11.464 #define SPDK_CONFIG_LIBDIR 00:08:11.464 #undef SPDK_CONFIG_LTO 00:08:11.464 #define SPDK_CONFIG_MAX_LCORES 00:08:11.464 #define SPDK_CONFIG_NVME_CUSE 1 00:08:11.464 #undef SPDK_CONFIG_OCF 00:08:11.464 #define SPDK_CONFIG_OCF_PATH 00:08:11.464 #define SPDK_CONFIG_OPENSSL_PATH 00:08:11.464 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:11.464 #undef SPDK_CONFIG_PGO_USE 00:08:11.464 #define SPDK_CONFIG_PREFIX /usr/local 00:08:11.464 #undef SPDK_CONFIG_RAID5F 00:08:11.464 #undef SPDK_CONFIG_RBD 00:08:11.464 #define SPDK_CONFIG_RDMA 1 00:08:11.464 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:11.464 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:11.464 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:11.464 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:11.464 #define SPDK_CONFIG_SHARED 1 00:08:11.464 #undef SPDK_CONFIG_SMA 00:08:11.464 #define SPDK_CONFIG_TESTS 1 00:08:11.464 #undef SPDK_CONFIG_TSAN 00:08:11.464 #define SPDK_CONFIG_UBLK 1 00:08:11.464 #define SPDK_CONFIG_UBSAN 1 00:08:11.464 #undef SPDK_CONFIG_UNIT_TESTS 00:08:11.464 #undef SPDK_CONFIG_URING 00:08:11.464 #define SPDK_CONFIG_URING_PATH 00:08:11.464 #undef SPDK_CONFIG_URING_ZNS 00:08:11.464 #undef SPDK_CONFIG_USDT 00:08:11.464 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:11.464 #undef 
SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:11.464 #undef SPDK_CONFIG_VFIO_USER 00:08:11.464 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:11.464 #define SPDK_CONFIG_VHOST 1 00:08:11.464 #define SPDK_CONFIG_VIRTIO 1 00:08:11.464 #undef SPDK_CONFIG_VTUNE 00:08:11.464 #define SPDK_CONFIG_VTUNE_DIR 00:08:11.464 #define SPDK_CONFIG_WERROR 1 00:08:11.464 #define SPDK_CONFIG_WPDK_DIR 00:08:11.464 #undef SPDK_CONFIG_XNVME 00:08:11.464 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:11.464 11:35:41 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:11.464 11:35:41 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:11.464 11:35:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.464 11:35:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.464 11:35:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.464 11:35:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.465 11:35:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.465 11:35:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.465 11:35:41 -- paths/export.sh@5 -- # export PATH 00:08:11.465 11:35:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.465 11:35:41 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:11.465 11:35:41 -- pm/common@6 -- # dirname 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:11.465 11:35:41 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:11.465 11:35:41 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:11.465 11:35:41 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:11.465 11:35:41 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:11.465 11:35:41 -- pm/common@16 -- # TEST_TAG=N/A 00:08:11.465 11:35:41 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:11.465 11:35:41 -- common/autotest_common.sh@52 -- # : 1 00:08:11.465 11:35:41 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:11.465 11:35:41 -- common/autotest_common.sh@56 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:11.465 11:35:41 -- common/autotest_common.sh@58 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:11.465 11:35:41 -- common/autotest_common.sh@60 -- # : 1 00:08:11.465 11:35:41 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:11.465 11:35:41 -- common/autotest_common.sh@62 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:11.465 11:35:41 -- common/autotest_common.sh@64 -- # : 00:08:11.465 11:35:41 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:11.465 11:35:41 -- common/autotest_common.sh@66 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:11.465 11:35:41 -- common/autotest_common.sh@68 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:11.465 11:35:41 -- common/autotest_common.sh@70 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:11.465 11:35:41 -- common/autotest_common.sh@72 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:11.465 11:35:41 -- common/autotest_common.sh@74 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:11.465 11:35:41 -- common/autotest_common.sh@76 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:11.465 11:35:41 -- common/autotest_common.sh@78 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:11.465 11:35:41 -- common/autotest_common.sh@80 -- # : 1 00:08:11.465 11:35:41 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:11.465 11:35:41 -- common/autotest_common.sh@82 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:11.465 11:35:41 -- common/autotest_common.sh@84 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:11.465 11:35:41 -- common/autotest_common.sh@86 -- # : 1 00:08:11.465 11:35:41 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:11.465 11:35:41 -- common/autotest_common.sh@88 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:11.465 11:35:41 -- common/autotest_common.sh@90 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:11.465 11:35:41 -- 
common/autotest_common.sh@92 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:11.465 11:35:41 -- common/autotest_common.sh@94 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:11.465 11:35:41 -- common/autotest_common.sh@96 -- # : rdma 00:08:11.465 11:35:41 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:11.465 11:35:41 -- common/autotest_common.sh@98 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:11.465 11:35:41 -- common/autotest_common.sh@100 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:11.465 11:35:41 -- common/autotest_common.sh@102 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:11.465 11:35:41 -- common/autotest_common.sh@104 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:11.465 11:35:41 -- common/autotest_common.sh@106 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:11.465 11:35:41 -- common/autotest_common.sh@108 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:11.465 11:35:41 -- common/autotest_common.sh@110 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:11.465 11:35:41 -- common/autotest_common.sh@112 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:11.465 11:35:41 -- common/autotest_common.sh@114 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:11.465 11:35:41 -- common/autotest_common.sh@116 -- # : 1 00:08:11.465 11:35:41 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:11.465 11:35:41 -- common/autotest_common.sh@118 -- # : 00:08:11.465 11:35:41 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:11.465 11:35:41 -- common/autotest_common.sh@120 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:11.465 11:35:41 -- common/autotest_common.sh@122 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:11.465 11:35:41 -- common/autotest_common.sh@124 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:11.465 11:35:41 -- common/autotest_common.sh@126 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:11.465 11:35:41 -- common/autotest_common.sh@128 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:11.465 11:35:41 -- common/autotest_common.sh@130 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:11.465 11:35:41 -- common/autotest_common.sh@132 -- # : 00:08:11.465 11:35:41 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:11.465 11:35:41 -- common/autotest_common.sh@134 -- # : true 00:08:11.465 11:35:41 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:11.465 11:35:41 -- common/autotest_common.sh@136 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:11.465 11:35:41 -- common/autotest_common.sh@138 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:11.465 
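The ": 0" / ": 1" entries paired with "export SPDK_TEST_..." in the trace above come from a flag-defaulting idiom: each test switch is given a default via bash parameter expansion and then exported, so values already exported by the job configuration (for example the mlx5 NIC selection and the UBSAN flag visible in this trace) take precedence over the defaults. A minimal sketch of that idiom, with an abbreviated and purely illustrative flag list rather than the script's full set:

  # Assign a default only when the variable is unset, then export it;
  # anything exported earlier by the job's autorun config is left untouched.
  : "${RUN_NIGHTLY:=0}";        export RUN_NIGHTLY
  : "${SPDK_TEST_NVMF:=0}";     export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_NICS:=}"; export SPDK_TEST_NVMF_NICS
  : "${SPDK_RUN_UBSAN:=0}";     export SPDK_RUN_UBSAN
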
11:35:41 -- common/autotest_common.sh@140 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:11.465 11:35:41 -- common/autotest_common.sh@142 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:11.465 11:35:41 -- common/autotest_common.sh@144 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:11.465 11:35:41 -- common/autotest_common.sh@146 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:11.465 11:35:41 -- common/autotest_common.sh@148 -- # : mlx5 00:08:11.465 11:35:41 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:11.465 11:35:41 -- common/autotest_common.sh@150 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:11.465 11:35:41 -- common/autotest_common.sh@152 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:11.465 11:35:41 -- common/autotest_common.sh@154 -- # : 0 00:08:11.465 11:35:41 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:11.465 11:35:41 -- common/autotest_common.sh@156 -- # : 0 00:08:11.466 11:35:41 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:11.466 11:35:41 -- common/autotest_common.sh@158 -- # : 0 00:08:11.466 11:35:41 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:11.466 11:35:41 -- common/autotest_common.sh@160 -- # : 0 00:08:11.466 11:35:41 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:11.466 11:35:41 -- common/autotest_common.sh@163 -- # : 00:08:11.466 11:35:41 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:11.466 11:35:41 -- common/autotest_common.sh@165 -- # : 0 00:08:11.466 11:35:41 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:11.466 11:35:41 -- common/autotest_common.sh@167 -- # : 0 00:08:11.466 11:35:41 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:11.466 11:35:41 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:11.466 11:35:41 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:11.466 11:35:41 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:11.466 11:35:41 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:11.466 11:35:41 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.466 11:35:41 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.466 11:35:41 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.466 11:35:41 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.466 11:35:41 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.466 11:35:41 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.466 11:35:41 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:11.466 11:35:41 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:11.466 11:35:41 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:11.466 11:35:41 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:11.466 11:35:41 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.466 11:35:41 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.466 11:35:41 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.466 11:35:41 -- common/autotest_common.sh@190 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.466 11:35:41 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:11.466 11:35:41 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:11.466 11:35:41 -- common/autotest_common.sh@196 -- # cat 00:08:11.466 11:35:41 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:11.466 11:35:41 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.466 11:35:41 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.466 11:35:41 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.466 11:35:41 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.466 11:35:41 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:11.466 11:35:41 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:11.466 11:35:41 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:11.466 11:35:41 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:11.466 11:35:41 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:11.466 11:35:41 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:11.466 11:35:41 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.466 11:35:41 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.466 11:35:41 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.466 11:35:41 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.466 11:35:41 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:11.466 11:35:41 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:11.466 11:35:41 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.466 11:35:41 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.466 11:35:41 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:11.466 11:35:41 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:11.466 11:35:41 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:11.466 11:35:41 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:11.466 11:35:41 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:11.466 11:35:41 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:11.466 11:35:41 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:11.466 11:35:41 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:11.466 11:35:41 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:11.466 11:35:41 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:11.466 11:35:41 -- common/autotest_common.sh@259 -- # valgrind= 00:08:11.466 11:35:41 -- 
common/autotest_common.sh@265 -- # uname -s 00:08:11.466 11:35:41 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:11.466 11:35:41 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:11.466 11:35:41 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:11.466 11:35:41 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:11.466 11:35:41 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:11.466 11:35:41 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:11.466 11:35:41 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:11.466 11:35:41 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:11.466 11:35:41 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:11.466 11:35:41 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:11.466 11:35:41 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:11.466 11:35:41 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:11.466 11:35:41 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:11.466 11:35:41 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:11.466 11:35:41 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:11.466 11:35:41 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:08:11.466 11:35:41 -- common/autotest_common.sh@319 -- # [[ -z 3603929 ]] 00:08:11.466 11:35:41 -- common/autotest_common.sh@319 -- # kill -0 3603929 00:08:11.466 11:35:41 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:11.466 11:35:41 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:11.466 11:35:41 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:11.466 11:35:41 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:11.466 11:35:41 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:11.466 11:35:41 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:11.466 11:35:41 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:11.466 11:35:41 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:11.466 11:35:41 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.zuNDl5 00:08:11.467 11:35:41 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:11.467 11:35:41 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:11.467 11:35:41 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:11.467 11:35:41 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zuNDl5/tests/target /tmp/spdk.zuNDl5 00:08:11.467 11:35:41 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:11.467 11:35:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.467 11:35:41 -- common/autotest_common.sh@328 -- # df -T 00:08:11.467 11:35:41 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:11.467 11:35:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:11.467 11:35:41 -- common/autotest_common.sh@361 
-- # read -r source fs size use avail _ mount 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:11.467 11:35:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:08:11.467 11:35:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=55757840384 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730598912 00:08:11.467 11:35:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=5972758528 00:08:11.467 11:35:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=30815830016 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865297408 00:08:11.467 11:35:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=49467392 00:08:11.467 11:35:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=12336680960 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346122240 00:08:11.467 11:35:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=9441280 00:08:11.467 11:35:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864883712 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865301504 00:08:11.467 11:35:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=417792 00:08:11.467 11:35:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:08:11.467 11:35:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173057024 00:08:11.467 11:35:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:11.467 11:35:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.467 11:35:41 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:11.467 * Looking for test storage... 
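The set_test_storage trace above (the mktemp -udt spdk.XXXXXX fallback, the df -T scan into the mounts/fss/sizes/avails arrays, and the "* Looking for test storage..." message) amounts to: find a filesystem with roughly 2 GiB free to host the test directory, and fall back to a scratch directory otherwise. A much-simplified, self-contained sketch of that idea; pick_test_storage is an illustrative name, not a helper from the SPDK scripts:

  # Simplified sketch: keep target_dir if its filesystem has enough free
  # space, otherwise fall back to a freshly created temp directory.
  pick_test_storage() {
      local target_dir=$1 requested_bytes=$2
      local avail
      avail=$(df --output=avail -B1 "$target_dir" | tail -n 1)
      if (( avail >= requested_bytes )); then
          echo "$target_dir"
      else
          mktemp -d -t spdk_storage.XXXXXX
      fi
  }
  # e.g. pick_test_storage "$PWD" $((2 * 1024 * 1024 * 1024))
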
00:08:11.467 11:35:41 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:11.467 11:35:41 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:11.467 11:35:41 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.467 11:35:41 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:11.467 11:35:41 -- common/autotest_common.sh@373 -- # mount=/ 00:08:11.467 11:35:41 -- common/autotest_common.sh@375 -- # target_space=55757840384 00:08:11.467 11:35:41 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:11.467 11:35:41 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:11.467 11:35:41 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:11.467 11:35:41 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:11.467 11:35:41 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:11.467 11:35:41 -- common/autotest_common.sh@382 -- # new_size=8187351040 00:08:11.467 11:35:41 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:11.467 11:35:41 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.467 11:35:41 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.467 11:35:41 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.467 11:35:41 -- common/autotest_common.sh@390 -- # return 0 00:08:11.467 11:35:41 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:11.467 11:35:41 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:11.467 11:35:41 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:11.467 11:35:41 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:11.467 11:35:41 -- common/autotest_common.sh@1682 -- # true 00:08:11.467 11:35:41 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:11.467 11:35:41 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:11.467 11:35:41 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:11.467 11:35:41 -- common/autotest_common.sh@27 -- # exec 00:08:11.467 11:35:41 -- common/autotest_common.sh@29 -- # exec 00:08:11.467 11:35:41 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:11.467 11:35:41 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:11.467 11:35:41 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:11.467 11:35:41 -- common/autotest_common.sh@18 -- # set -x 00:08:11.467 11:35:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.467 11:35:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.467 11:35:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.467 11:35:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.467 11:35:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.467 11:35:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.467 11:35:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.467 11:35:41 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.467 11:35:41 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.467 11:35:41 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.467 11:35:41 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.467 11:35:41 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.467 11:35:41 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.467 11:35:41 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.467 11:35:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.467 11:35:41 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.467 11:35:41 -- scripts/common.sh@344 -- # : 1 00:08:11.467 11:35:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.467 11:35:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.467 11:35:41 -- scripts/common.sh@364 -- # decimal 1 00:08:11.467 11:35:41 -- scripts/common.sh@352 -- # local d=1 00:08:11.467 11:35:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.467 11:35:41 -- scripts/common.sh@354 -- # echo 1 00:08:11.467 11:35:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.467 11:35:41 -- scripts/common.sh@365 -- # decimal 2 00:08:11.467 11:35:41 -- scripts/common.sh@352 -- # local d=2 00:08:11.467 11:35:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.467 11:35:41 -- scripts/common.sh@354 -- # echo 2 00:08:11.467 11:35:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.467 11:35:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.467 11:35:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.467 11:35:41 -- scripts/common.sh@367 -- # return 0 00:08:11.467 11:35:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.467 11:35:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.467 --rc genhtml_branch_coverage=1 00:08:11.467 --rc genhtml_function_coverage=1 00:08:11.467 --rc genhtml_legend=1 00:08:11.467 --rc geninfo_all_blocks=1 00:08:11.467 --rc geninfo_unexecuted_blocks=1 00:08:11.468 00:08:11.468 ' 00:08:11.468 11:35:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.468 --rc genhtml_branch_coverage=1 00:08:11.468 --rc genhtml_function_coverage=1 00:08:11.468 --rc genhtml_legend=1 00:08:11.468 --rc geninfo_all_blocks=1 00:08:11.468 --rc geninfo_unexecuted_blocks=1 00:08:11.468 00:08:11.468 ' 00:08:11.468 11:35:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.468 --rc genhtml_branch_coverage=1 00:08:11.468 --rc genhtml_function_coverage=1 00:08:11.468 --rc genhtml_legend=1 00:08:11.468 --rc geninfo_all_blocks=1 00:08:11.468 --rc 
geninfo_unexecuted_blocks=1 00:08:11.468 00:08:11.468 ' 00:08:11.468 11:35:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.468 --rc genhtml_branch_coverage=1 00:08:11.468 --rc genhtml_function_coverage=1 00:08:11.468 --rc genhtml_legend=1 00:08:11.468 --rc geninfo_all_blocks=1 00:08:11.468 --rc geninfo_unexecuted_blocks=1 00:08:11.468 00:08:11.468 ' 00:08:11.468 11:35:41 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.468 11:35:41 -- nvmf/common.sh@7 -- # uname -s 00:08:11.468 11:35:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.468 11:35:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.468 11:35:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.468 11:35:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.468 11:35:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.468 11:35:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.468 11:35:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.468 11:35:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.468 11:35:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.468 11:35:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.468 11:35:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:11.468 11:35:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:11.468 11:35:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.468 11:35:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.468 11:35:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.468 11:35:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:11.468 11:35:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.468 11:35:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.468 11:35:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.468 11:35:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.468 11:35:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.468 11:35:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.468 11:35:41 -- paths/export.sh@5 -- # export PATH 00:08:11.468 11:35:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.468 11:35:41 -- nvmf/common.sh@46 -- # : 0 00:08:11.468 11:35:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:11.468 11:35:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:11.468 11:35:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:11.468 11:35:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.468 11:35:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.468 11:35:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:11.468 11:35:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:11.468 11:35:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:11.468 11:35:41 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:11.468 11:35:41 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:11.468 11:35:41 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:11.468 11:35:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:11.468 11:35:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.468 11:35:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:11.468 11:35:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:11.468 11:35:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:11.468 11:35:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.468 11:35:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.468 11:35:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.468 11:35:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:11.468 11:35:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:11.468 11:35:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:11.468 11:35:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.063 11:35:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:18.063 11:35:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:18.063 11:35:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:18.063 11:35:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:18.063 11:35:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:18.063 11:35:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:18.063 11:35:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:18.063 11:35:47 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:18.063 11:35:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:18.063 11:35:47 -- nvmf/common.sh@295 -- # e810=() 00:08:18.063 11:35:47 -- nvmf/common.sh@295 -- # local -ga e810 00:08:18.063 11:35:47 -- nvmf/common.sh@296 -- # x722=() 00:08:18.063 11:35:47 -- nvmf/common.sh@296 -- # local -ga x722 00:08:18.063 11:35:47 -- nvmf/common.sh@297 -- # mlx=() 00:08:18.063 11:35:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:18.063 11:35:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.063 11:35:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:18.063 11:35:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:18.063 11:35:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:18.063 11:35:47 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:18.063 11:35:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:18.063 11:35:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:18.063 11:35:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:18.063 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:18.063 11:35:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:18.063 11:35:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:18.063 11:35:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:18.063 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:18.063 11:35:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:18.063 11:35:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:18.063 11:35:47 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:18.063 
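The discovery sequence traced here ("Found 0000:d9:00.0 (0x15b3 - 0x1015)" followed by the pci_net_devs glob) leans on sysfs: every network-capable PCI function lists its kernel interface names under /sys/bus/pci/devices/<bdf>/net/. A minimal stand-alone sketch of that lookup; pci_to_netdevs is an illustrative name, and the BDF below is simply the example from this trace:

  # List the net interfaces backed by one PCI function, using the same
  # sysfs layout the nvmf/common.sh glob walks in the trace above.
  pci_to_netdevs() {
      local bdf=$1 entry
      for entry in /sys/bus/pci/devices/"$bdf"/net/*; do
          [[ -e $entry ]] && basename "$entry"
      done
  }
  # e.g. pci_to_netdevs 0000:d9:00.0    # prints mlx_0_0 on this test bed
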
11:35:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.063 11:35:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:18.063 11:35:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.063 11:35:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:18.063 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:18.063 11:35:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.063 11:35:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:18.063 11:35:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.063 11:35:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:18.063 11:35:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.063 11:35:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:18.063 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:18.063 11:35:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.063 11:35:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:18.063 11:35:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:18.063 11:35:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:18.063 11:35:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:18.063 11:35:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:18.063 11:35:47 -- nvmf/common.sh@57 -- # uname 00:08:18.063 11:35:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:18.063 11:35:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:18.063 11:35:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:18.063 11:35:48 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:18.063 11:35:48 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:18.063 11:35:48 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:18.063 11:35:48 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:18.063 11:35:48 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:18.063 11:35:48 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:18.063 11:35:48 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:18.063 11:35:48 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:18.063 11:35:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:18.064 11:35:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:18.064 11:35:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:18.064 11:35:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:18.064 11:35:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:18.064 11:35:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:18.064 11:35:48 -- nvmf/common.sh@104 -- # continue 2 00:08:18.064 11:35:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:18.064 11:35:48 -- nvmf/common.sh@104 -- # continue 2 00:08:18.064 11:35:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:18.064 11:35:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:18.064 11:35:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:18.064 11:35:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:18.064 11:35:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:18.064 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:18.064 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:18.064 altname enp217s0f0np0 00:08:18.064 altname ens818f0np0 00:08:18.064 inet 192.168.100.8/24 scope global mlx_0_0 00:08:18.064 valid_lft forever preferred_lft forever 00:08:18.064 11:35:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:18.064 11:35:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:18.064 11:35:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:18.064 11:35:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:18.064 11:35:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:18.064 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:18.064 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:18.064 altname enp217s0f1np1 00:08:18.064 altname ens818f1np1 00:08:18.064 inet 192.168.100.9/24 scope global mlx_0_1 00:08:18.064 valid_lft forever preferred_lft forever 00:08:18.064 11:35:48 -- nvmf/common.sh@410 -- # return 0 00:08:18.064 11:35:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:18.064 11:35:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:18.064 11:35:48 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:18.064 11:35:48 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:18.064 11:35:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:18.064 11:35:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:18.064 11:35:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:18.064 11:35:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:18.064 11:35:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:18.064 11:35:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:18.064 11:35:48 -- nvmf/common.sh@104 -- # continue 2 00:08:18.064 11:35:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.064 11:35:48 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:18.064 11:35:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:18.064 11:35:48 -- nvmf/common.sh@104 -- # continue 2 00:08:18.064 11:35:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:18.064 11:35:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:18.064 11:35:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:18.064 11:35:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:18.064 11:35:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:18.064 11:35:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:18.064 11:35:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:18.064 11:35:48 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:18.064 192.168.100.9' 00:08:18.064 11:35:48 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:18.064 192.168.100.9' 00:08:18.064 11:35:48 -- nvmf/common.sh@445 -- # head -n 1 00:08:18.064 11:35:48 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:18.064 11:35:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:18.064 192.168.100.9' 00:08:18.064 11:35:48 -- nvmf/common.sh@446 -- # tail -n +2 00:08:18.064 11:35:48 -- nvmf/common.sh@446 -- # head -n 1 00:08:18.064 11:35:48 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:18.064 11:35:48 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:18.064 11:35:48 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:18.064 11:35:48 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:18.064 11:35:48 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:18.064 11:35:48 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:18.064 11:35:48 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:18.064 11:35:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:18.064 11:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.064 11:35:48 -- common/autotest_common.sh@10 -- # set +x 00:08:18.064 ************************************ 00:08:18.064 START TEST nvmf_filesystem_no_in_capsule 00:08:18.064 ************************************ 00:08:18.064 11:35:48 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:18.064 11:35:48 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:18.064 11:35:48 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:18.064 11:35:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:18.064 11:35:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.064 11:35:48 -- common/autotest_common.sh@10 -- # set +x 00:08:18.064 11:35:48 -- nvmf/common.sh@469 -- # nvmfpid=3607250 00:08:18.064 11:35:48 -- nvmf/common.sh@470 -- # waitforlisten 3607250 00:08:18.064 11:35:48 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.064 11:35:48 -- common/autotest_common.sh@829 -- # '[' -z 3607250 ']' 00:08:18.065 11:35:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.065 11:35:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.065 11:35:48 -- common/autotest_common.sh@836 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.065 11:35:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.065 11:35:48 -- common/autotest_common.sh@10 -- # set +x 00:08:18.065 [2024-12-03 11:35:48.321875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:18.065 [2024-12-03 11:35:48.321934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.065 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.065 [2024-12-03 11:35:48.393004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.065 [2024-12-03 11:35:48.469530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:18.065 [2024-12-03 11:35:48.469634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.065 [2024-12-03 11:35:48.469644] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.065 [2024-12-03 11:35:48.469655] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.065 [2024-12-03 11:35:48.469695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.065 [2024-12-03 11:35:48.469793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.065 [2024-12-03 11:35:48.469868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.065 [2024-12-03 11:35:48.469870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.631 11:35:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.631 11:35:49 -- common/autotest_common.sh@862 -- # return 0 00:08:18.631 11:35:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:18.631 11:35:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.631 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.631 11:35:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.631 11:35:49 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:18.631 11:35:49 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:18.631 11:35:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.631 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.631 [2024-12-03 11:35:49.193425] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:18.631 [2024-12-03 11:35:49.214667] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b49090/0x1b4d580) succeed. 00:08:18.631 [2024-12-03 11:35:49.223816] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b4a680/0x1b8ec20) succeed. 
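Up to here the trace shows the usual target bring-up for the no-in-capsule variant: nvmfappstart launches build/bin/nvmf_tgt, the harness blocks until the target answers on /var/tmp/spdk.sock, and nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 creates the RDMA transport (the rdma.c warning notes that a requested in-capsule size of 0 is bumped to the 256-byte minimum). The two "Create IB device ... succeed" notices confirm both mlx5 ports registered. A minimal standalone sketch of the wait-for-RPC idiom, assuming the real waitforlisten helper simply polls the RPC socket with scripts/rpc.py:

    # sketch: block until nvmf_tgt answers on its UNIX-domain RPC socket
    pid=$1                                 # PID printed by nvmfappstart, e.g. 3607250
    rpc_sock=${2:-/var/tmp/spdk.sock}
    for i in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        # rpc_get_methods succeeds once the socket accepts requests
        ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && exit 0
        sleep 0.5
    done
    echo "timed out waiting for $rpc_sock" >&2
    exit 1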
00:08:18.889 11:35:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.889 11:35:49 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:18.890 11:35:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.890 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.890 Malloc1 00:08:18.890 11:35:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.890 11:35:49 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.890 11:35:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.890 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.890 11:35:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.890 11:35:49 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:18.890 11:35:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.890 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.890 11:35:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.890 11:35:49 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:18.890 11:35:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.890 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.890 [2024-12-03 11:35:49.463878] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:18.890 11:35:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.890 11:35:49 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:18.890 11:35:49 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:18.890 11:35:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:18.890 11:35:49 -- common/autotest_common.sh@1369 -- # local bs 00:08:18.890 11:35:49 -- common/autotest_common.sh@1370 -- # local nb 00:08:18.890 11:35:49 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:18.890 11:35:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.890 11:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.890 11:35:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.890 11:35:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:18.890 { 00:08:18.890 "name": "Malloc1", 00:08:18.890 "aliases": [ 00:08:18.890 "f076be40-9a23-4831-9776-f3282889a1a8" 00:08:18.890 ], 00:08:18.890 "product_name": "Malloc disk", 00:08:18.890 "block_size": 512, 00:08:18.890 "num_blocks": 1048576, 00:08:18.890 "uuid": "f076be40-9a23-4831-9776-f3282889a1a8", 00:08:18.890 "assigned_rate_limits": { 00:08:18.890 "rw_ios_per_sec": 0, 00:08:18.890 "rw_mbytes_per_sec": 0, 00:08:18.890 "r_mbytes_per_sec": 0, 00:08:18.890 "w_mbytes_per_sec": 0 00:08:18.890 }, 00:08:18.890 "claimed": true, 00:08:18.890 "claim_type": "exclusive_write", 00:08:18.890 "zoned": false, 00:08:18.890 "supported_io_types": { 00:08:18.890 "read": true, 00:08:18.890 "write": true, 00:08:18.890 "unmap": true, 00:08:18.890 "write_zeroes": true, 00:08:18.890 "flush": true, 00:08:18.890 "reset": true, 00:08:18.890 "compare": false, 00:08:18.890 "compare_and_write": false, 00:08:18.890 "abort": true, 00:08:18.890 "nvme_admin": false, 00:08:18.890 "nvme_io": false 00:08:18.890 }, 00:08:18.890 "memory_domains": [ 00:08:18.890 { 00:08:18.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.890 "dma_device_type": 2 00:08:18.890 } 00:08:18.890 ], 00:08:18.890 
"driver_specific": {} 00:08:18.890 } 00:08:18.890 ]' 00:08:18.890 11:35:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:19.148 11:35:49 -- common/autotest_common.sh@1372 -- # bs=512 00:08:19.148 11:35:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:19.148 11:35:49 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:19.148 11:35:49 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:19.148 11:35:49 -- common/autotest_common.sh@1377 -- # echo 512 00:08:19.148 11:35:49 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:19.148 11:35:49 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:20.082 11:35:50 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:20.082 11:35:50 -- common/autotest_common.sh@1187 -- # local i=0 00:08:20.082 11:35:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:20.082 11:35:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:20.082 11:35:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:21.984 11:35:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:21.984 11:35:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:21.984 11:35:52 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:22.242 11:35:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:22.242 11:35:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:22.242 11:35:52 -- common/autotest_common.sh@1197 -- # return 0 00:08:22.242 11:35:52 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:22.242 11:35:52 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:22.242 11:35:52 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:22.242 11:35:52 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:22.242 11:35:52 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:22.242 11:35:52 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:22.242 11:35:52 -- setup/common.sh@80 -- # echo 536870912 00:08:22.242 11:35:52 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:22.242 11:35:52 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:22.242 11:35:52 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:22.242 11:35:52 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:22.242 11:35:52 -- target/filesystem.sh@69 -- # partprobe 00:08:22.242 11:35:52 -- target/filesystem.sh@70 -- # sleep 1 00:08:23.177 11:35:53 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:23.177 11:35:53 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:23.177 11:35:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:23.177 11:35:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.177 11:35:53 -- common/autotest_common.sh@10 -- # set +x 00:08:23.435 ************************************ 00:08:23.435 START TEST filesystem_ext4 00:08:23.435 ************************************ 00:08:23.435 11:35:53 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:23.435 11:35:53 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:23.435 11:35:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.435 
11:35:53 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:23.435 11:35:53 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:23.435 11:35:53 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:23.435 11:35:53 -- common/autotest_common.sh@914 -- # local i=0 00:08:23.435 11:35:53 -- common/autotest_common.sh@915 -- # local force 00:08:23.435 11:35:53 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:23.435 11:35:53 -- common/autotest_common.sh@918 -- # force=-F 00:08:23.435 11:35:53 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:23.435 mke2fs 1.47.0 (5-Feb-2023) 00:08:23.435 Discarding device blocks: 0/522240 done 00:08:23.435 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:23.435 Filesystem UUID: 563b8dc8-afa0-46ea-9b04-4a3665a19119 00:08:23.435 Superblock backups stored on blocks: 00:08:23.435 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:23.435 00:08:23.435 Allocating group tables: 0/64 done 00:08:23.435 Writing inode tables: 0/64 done 00:08:23.435 Creating journal (8192 blocks): done 00:08:23.435 Writing superblocks and filesystem accounting information: 0/64 done 00:08:23.435 00:08:23.435 11:35:53 -- common/autotest_common.sh@931 -- # return 0 00:08:23.435 11:35:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.435 11:35:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.435 11:35:53 -- target/filesystem.sh@25 -- # sync 00:08:23.435 11:35:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.435 11:35:53 -- target/filesystem.sh@27 -- # sync 00:08:23.435 11:35:53 -- target/filesystem.sh@29 -- # i=0 00:08:23.435 11:35:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.435 11:35:53 -- target/filesystem.sh@37 -- # kill -0 3607250 00:08:23.435 11:35:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.435 11:35:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.435 11:35:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.435 11:35:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.435 00:08:23.435 real 0m0.202s 00:08:23.435 user 0m0.040s 00:08:23.435 sys 0m0.070s 00:08:23.435 11:35:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.435 11:35:53 -- common/autotest_common.sh@10 -- # set +x 00:08:23.435 ************************************ 00:08:23.435 END TEST filesystem_ext4 00:08:23.435 ************************************ 00:08:23.435 11:35:54 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:23.435 11:35:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:23.435 11:35:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.435 11:35:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.435 ************************************ 00:08:23.435 START TEST filesystem_btrfs 00:08:23.435 ************************************ 00:08:23.694 11:35:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:23.694 11:35:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:23.694 11:35:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.694 11:35:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:23.694 11:35:54 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:23.694 11:35:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:23.694 11:35:54 -- common/autotest_common.sh@914 -- # local 
i=0 00:08:23.694 11:35:54 -- common/autotest_common.sh@915 -- # local force 00:08:23.694 11:35:54 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:23.694 11:35:54 -- common/autotest_common.sh@920 -- # force=-f 00:08:23.694 11:35:54 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:23.694 btrfs-progs v6.8.1 00:08:23.694 See https://btrfs.readthedocs.io for more information. 00:08:23.694 00:08:23.694 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:23.694 NOTE: several default settings have changed in version 5.15, please make sure 00:08:23.694 this does not affect your deployments: 00:08:23.694 - DUP for metadata (-m dup) 00:08:23.694 - enabled no-holes (-O no-holes) 00:08:23.694 - enabled free-space-tree (-R free-space-tree) 00:08:23.694 00:08:23.694 Label: (null) 00:08:23.694 UUID: e3d69a20-8685-42dd-b18a-c243097c3bbc 00:08:23.694 Node size: 16384 00:08:23.694 Sector size: 4096 (CPU page size: 4096) 00:08:23.694 Filesystem size: 510.00MiB 00:08:23.694 Block group profiles: 00:08:23.694 Data: single 8.00MiB 00:08:23.694 Metadata: DUP 32.00MiB 00:08:23.694 System: DUP 8.00MiB 00:08:23.694 SSD detected: yes 00:08:23.694 Zoned device: no 00:08:23.694 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:23.694 Checksum: crc32c 00:08:23.694 Number of devices: 1 00:08:23.694 Devices: 00:08:23.694 ID SIZE PATH 00:08:23.694 1 510.00MiB /dev/nvme0n1p1 00:08:23.694 00:08:23.694 11:35:54 -- common/autotest_common.sh@931 -- # return 0 00:08:23.694 11:35:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.694 11:35:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.694 11:35:54 -- target/filesystem.sh@25 -- # sync 00:08:23.694 11:35:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.694 11:35:54 -- target/filesystem.sh@27 -- # sync 00:08:23.694 11:35:54 -- target/filesystem.sh@29 -- # i=0 00:08:23.694 11:35:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.694 11:35:54 -- target/filesystem.sh@37 -- # kill -0 3607250 00:08:23.694 11:35:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.694 11:35:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.694 11:35:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.694 11:35:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.694 00:08:23.694 real 0m0.249s 00:08:23.694 user 0m0.031s 00:08:23.694 sys 0m0.128s 00:08:23.694 11:35:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.694 11:35:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.694 ************************************ 00:08:23.694 END TEST filesystem_btrfs 00:08:23.694 ************************************ 00:08:23.953 11:35:54 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:23.953 11:35:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:23.953 11:35:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.953 11:35:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.953 ************************************ 00:08:23.953 START TEST filesystem_xfs 00:08:23.953 ************************************ 00:08:23.953 11:35:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:23.953 11:35:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:23.953 11:35:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.953 11:35:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:23.953 11:35:54 -- 
common/autotest_common.sh@912 -- # local fstype=xfs 00:08:23.953 11:35:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:23.953 11:35:54 -- common/autotest_common.sh@914 -- # local i=0 00:08:23.953 11:35:54 -- common/autotest_common.sh@915 -- # local force 00:08:23.953 11:35:54 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:23.953 11:35:54 -- common/autotest_common.sh@920 -- # force=-f 00:08:23.953 11:35:54 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:23.953 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:23.953 = sectsz=512 attr=2, projid32bit=1 00:08:23.953 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:23.953 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:23.953 data = bsize=4096 blocks=130560, imaxpct=25 00:08:23.953 = sunit=0 swidth=0 blks 00:08:23.953 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:23.953 log =internal log bsize=4096 blocks=16384, version=2 00:08:23.953 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:23.953 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:23.953 Discarding blocks...Done. 00:08:23.953 11:35:54 -- common/autotest_common.sh@931 -- # return 0 00:08:23.953 11:35:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.953 11:35:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.953 11:35:54 -- target/filesystem.sh@25 -- # sync 00:08:23.953 11:35:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.953 11:35:54 -- target/filesystem.sh@27 -- # sync 00:08:23.953 11:35:54 -- target/filesystem.sh@29 -- # i=0 00:08:23.953 11:35:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.953 11:35:54 -- target/filesystem.sh@37 -- # kill -0 3607250 00:08:23.953 11:35:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.953 11:35:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.953 11:35:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.953 11:35:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.953 00:08:23.953 real 0m0.207s 00:08:23.953 user 0m0.034s 00:08:23.953 sys 0m0.077s 00:08:23.953 11:35:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.953 11:35:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.953 ************************************ 00:08:23.953 END TEST filesystem_xfs 00:08:23.953 ************************************ 00:08:24.212 11:35:54 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:24.212 11:35:54 -- target/filesystem.sh@93 -- # sync 00:08:24.212 11:35:54 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:25.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.145 11:35:55 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:25.145 11:35:55 -- common/autotest_common.sh@1208 -- # local i=0 00:08:25.145 11:35:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:25.145 11:35:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.145 11:35:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:25.145 11:35:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.145 11:35:55 -- common/autotest_common.sh@1220 -- # return 0 00:08:25.145 11:35:55 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.145 11:35:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.145 11:35:55 -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.145 11:35:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.145 11:35:55 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:25.145 11:35:55 -- target/filesystem.sh@101 -- # killprocess 3607250 00:08:25.145 11:35:55 -- common/autotest_common.sh@936 -- # '[' -z 3607250 ']' 00:08:25.145 11:35:55 -- common/autotest_common.sh@940 -- # kill -0 3607250 00:08:25.145 11:35:55 -- common/autotest_common.sh@941 -- # uname 00:08:25.145 11:35:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:25.145 11:35:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3607250 00:08:25.145 11:35:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:25.145 11:35:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:25.145 11:35:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3607250' 00:08:25.145 killing process with pid 3607250 00:08:25.145 11:35:55 -- common/autotest_common.sh@955 -- # kill 3607250 00:08:25.145 11:35:55 -- common/autotest_common.sh@960 -- # wait 3607250 00:08:25.711 11:35:56 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:25.711 00:08:25.711 real 0m7.837s 00:08:25.711 user 0m30.498s 00:08:25.711 sys 0m1.153s 00:08:25.711 11:35:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.711 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.711 ************************************ 00:08:25.711 END TEST nvmf_filesystem_no_in_capsule 00:08:25.711 ************************************ 00:08:25.711 11:35:56 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:25.711 11:35:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:25.711 11:35:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.711 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.711 ************************************ 00:08:25.711 START TEST nvmf_filesystem_in_capsule 00:08:25.711 ************************************ 00:08:25.711 11:35:56 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:25.711 11:35:56 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:25.711 11:35:56 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:25.711 11:35:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:25.711 11:35:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.711 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.711 11:35:56 -- nvmf/common.sh@469 -- # nvmfpid=3608874 00:08:25.711 11:35:56 -- nvmf/common.sh@470 -- # waitforlisten 3608874 00:08:25.711 11:35:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.711 11:35:56 -- common/autotest_common.sh@829 -- # '[' -z 3608874 ']' 00:08:25.711 11:35:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.711 11:35:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.711 11:35:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
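The first variant is finished: the target with PID 3607250 was killed and nvmf_filesystem_no_in_capsule reported 7.837s real time. The run starting here repeats the same steps with in_capsule=4096, so the only functional difference is the -c argument given to nvmf_create_transport: -c 0 falls back to the transport's 256-byte minimum, while -c 4096 lets small writes travel inside the RDMA command capsule. The two transport RPCs as issued in the traces, assuming rpc_cmd wraps scripts/rpc.py:

    # nvmf_filesystem_no_in_capsule (previous run)
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    # nvmf_filesystem_in_capsule (this run)
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096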
00:08:25.711 11:35:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.711 11:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.711 [2024-12-03 11:35:56.208619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:25.711 [2024-12-03 11:35:56.208671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.711 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.711 [2024-12-03 11:35:56.276010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.968 [2024-12-03 11:35:56.345172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.968 [2024-12-03 11:35:56.345283] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.968 [2024-12-03 11:35:56.345292] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.968 [2024-12-03 11:35:56.345301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.968 [2024-12-03 11:35:56.345346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.968 [2024-12-03 11:35:56.345442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.968 [2024-12-03 11:35:56.345527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.968 [2024-12-03 11:35:56.345528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.533 11:35:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.533 11:35:57 -- common/autotest_common.sh@862 -- # return 0 00:08:26.533 11:35:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:26.533 11:35:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.533 11:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.533 11:35:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.533 11:35:57 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:26.533 11:35:57 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:26.533 11:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.533 11:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.533 [2024-12-03 11:35:57.091748] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5d3090/0x5d7580) succeed. 00:08:26.533 [2024-12-03 11:35:57.100982] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5d4680/0x618c20) succeed. 
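With the second target up and both mlx5 IB devices created, the next lines provision the export exactly as in the first run. Written out as plain RPC calls (again assuming rpc_cmd wraps scripts/rpc.py; Malloc1, cnode1 and the serial are the names used in this run):

    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1      # 512 MiB RAM-backed bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host then attaches with nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420, just as before.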
00:08:26.790 11:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.790 11:35:57 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:26.790 11:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.790 11:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.790 Malloc1 00:08:26.790 11:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.790 11:35:57 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.790 11:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.790 11:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.790 11:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.790 11:35:57 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:26.790 11:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.790 11:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.790 11:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.790 11:35:57 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.790 11:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.790 11:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.790 [2024-12-03 11:35:57.368119] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.790 11:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.790 11:35:57 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:26.790 11:35:57 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:26.790 11:35:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:26.790 11:35:57 -- common/autotest_common.sh@1369 -- # local bs 00:08:26.790 11:35:57 -- common/autotest_common.sh@1370 -- # local nb 00:08:26.790 11:35:57 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:26.790 11:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.790 11:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.790 11:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.790 11:35:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:26.790 { 00:08:26.790 "name": "Malloc1", 00:08:26.790 "aliases": [ 00:08:26.790 "02cfc11e-67ad-4fd5-8a58-bab94dede79f" 00:08:26.790 ], 00:08:26.790 "product_name": "Malloc disk", 00:08:26.790 "block_size": 512, 00:08:26.790 "num_blocks": 1048576, 00:08:26.790 "uuid": "02cfc11e-67ad-4fd5-8a58-bab94dede79f", 00:08:26.790 "assigned_rate_limits": { 00:08:26.790 "rw_ios_per_sec": 0, 00:08:26.790 "rw_mbytes_per_sec": 0, 00:08:26.790 "r_mbytes_per_sec": 0, 00:08:26.790 "w_mbytes_per_sec": 0 00:08:26.790 }, 00:08:26.790 "claimed": true, 00:08:26.790 "claim_type": "exclusive_write", 00:08:26.790 "zoned": false, 00:08:26.790 "supported_io_types": { 00:08:26.790 "read": true, 00:08:26.790 "write": true, 00:08:26.790 "unmap": true, 00:08:26.790 "write_zeroes": true, 00:08:26.790 "flush": true, 00:08:26.790 "reset": true, 00:08:26.790 "compare": false, 00:08:26.790 "compare_and_write": false, 00:08:26.790 "abort": true, 00:08:26.790 "nvme_admin": false, 00:08:26.790 "nvme_io": false 00:08:26.790 }, 00:08:26.790 "memory_domains": [ 00:08:26.790 { 00:08:26.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.790 "dma_device_type": 2 00:08:26.790 } 00:08:26.790 ], 00:08:26.790 
"driver_specific": {} 00:08:26.790 } 00:08:26.790 ]' 00:08:26.790 11:35:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:27.047 11:35:57 -- common/autotest_common.sh@1372 -- # bs=512 00:08:27.047 11:35:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:27.047 11:35:57 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:27.047 11:35:57 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:27.047 11:35:57 -- common/autotest_common.sh@1377 -- # echo 512 00:08:27.047 11:35:57 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:27.047 11:35:57 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:27.978 11:35:58 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.978 11:35:58 -- common/autotest_common.sh@1187 -- # local i=0 00:08:27.978 11:35:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.978 11:35:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:27.978 11:35:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:29.876 11:36:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:29.876 11:36:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:29.876 11:36:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.876 11:36:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:29.876 11:36:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.876 11:36:00 -- common/autotest_common.sh@1197 -- # return 0 00:08:29.876 11:36:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:29.876 11:36:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:30.134 11:36:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:30.134 11:36:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:30.134 11:36:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:30.134 11:36:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:30.134 11:36:00 -- setup/common.sh@80 -- # echo 536870912 00:08:30.134 11:36:00 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:30.134 11:36:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:30.134 11:36:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:30.134 11:36:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:30.134 11:36:00 -- target/filesystem.sh@69 -- # partprobe 00:08:30.134 11:36:00 -- target/filesystem.sh@70 -- # sleep 1 00:08:31.068 11:36:01 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:31.068 11:36:01 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:31.068 11:36:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:31.068 11:36:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.068 11:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.068 ************************************ 00:08:31.068 START TEST filesystem_in_capsule_ext4 00:08:31.068 ************************************ 00:08:31.068 11:36:01 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:31.068 11:36:01 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:31.068 11:36:01 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:31.068 11:36:01 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:31.068 11:36:01 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:31.068 11:36:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:31.068 11:36:01 -- common/autotest_common.sh@914 -- # local i=0 00:08:31.068 11:36:01 -- common/autotest_common.sh@915 -- # local force 00:08:31.068 11:36:01 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:31.068 11:36:01 -- common/autotest_common.sh@918 -- # force=-F 00:08:31.068 11:36:01 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:31.325 mke2fs 1.47.0 (5-Feb-2023) 00:08:31.325 Discarding device blocks: 0/522240 done 00:08:31.325 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:31.325 Filesystem UUID: a0243f6e-20f0-48fd-a4ac-cfb7d650309b 00:08:31.325 Superblock backups stored on blocks: 00:08:31.325 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:31.325 00:08:31.325 Allocating group tables: 0/64 done 00:08:31.325 Writing inode tables: 0/64 done 00:08:31.325 Creating journal (8192 blocks): done 00:08:31.325 Writing superblocks and filesystem accounting information: 0/64 done 00:08:31.325 00:08:31.325 11:36:01 -- common/autotest_common.sh@931 -- # return 0 00:08:31.325 11:36:01 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.325 11:36:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.325 11:36:01 -- target/filesystem.sh@25 -- # sync 00:08:31.325 11:36:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.325 11:36:01 -- target/filesystem.sh@27 -- # sync 00:08:31.325 11:36:01 -- target/filesystem.sh@29 -- # i=0 00:08:31.325 11:36:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.325 11:36:01 -- target/filesystem.sh@37 -- # kill -0 3608874 00:08:31.325 11:36:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.325 11:36:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.325 11:36:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.325 11:36:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.325 00:08:31.325 real 0m0.195s 00:08:31.325 user 0m0.023s 00:08:31.325 sys 0m0.079s 00:08:31.325 11:36:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.325 11:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.325 ************************************ 00:08:31.325 END TEST filesystem_in_capsule_ext4 00:08:31.325 ************************************ 00:08:31.325 11:36:01 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:31.325 11:36:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:31.325 11:36:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.325 11:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.325 ************************************ 00:08:31.325 START TEST filesystem_in_capsule_btrfs 00:08:31.325 ************************************ 00:08:31.325 11:36:01 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:31.325 11:36:01 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:31.325 11:36:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.325 11:36:01 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:31.325 11:36:01 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:31.325 11:36:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
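The in-capsule ext4 subtest above and the btrfs subtest continuing below follow the same verification cycle as the first run: build a filesystem on /dev/nvme0n1p1, mount it, create and remove a file, unmount, then confirm that the target is still alive and the namespace is still visible. A compact sketch of that cycle (pid 3608874 is this run's target; the mkfs line is the only per-subtest difference):

    dev=/dev/nvme0n1p1
    pid=3608874                          # nvmf_tgt PID for this run
    mkfs.ext4 -F "$dev"                  # mkfs.btrfs -f / mkfs.xfs -f in the other subtests
    mount "$dev" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$pid"                                   # target process must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1            # namespace still exposed
    lsblk -l -o NAME | grep -q -w nvme0n1p1          # partition still present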
00:08:31.325 11:36:01 -- common/autotest_common.sh@914 -- # local i=0 00:08:31.325 11:36:01 -- common/autotest_common.sh@915 -- # local force 00:08:31.325 11:36:01 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:31.325 11:36:01 -- common/autotest_common.sh@920 -- # force=-f 00:08:31.325 11:36:01 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:31.584 btrfs-progs v6.8.1 00:08:31.584 See https://btrfs.readthedocs.io for more information. 00:08:31.584 00:08:31.584 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:31.584 NOTE: several default settings have changed in version 5.15, please make sure 00:08:31.584 this does not affect your deployments: 00:08:31.584 - DUP for metadata (-m dup) 00:08:31.584 - enabled no-holes (-O no-holes) 00:08:31.584 - enabled free-space-tree (-R free-space-tree) 00:08:31.584 00:08:31.584 Label: (null) 00:08:31.584 UUID: fc46e527-5ac7-4380-b3ff-2c3c9d0d3ae6 00:08:31.584 Node size: 16384 00:08:31.584 Sector size: 4096 (CPU page size: 4096) 00:08:31.584 Filesystem size: 510.00MiB 00:08:31.584 Block group profiles: 00:08:31.584 Data: single 8.00MiB 00:08:31.584 Metadata: DUP 32.00MiB 00:08:31.584 System: DUP 8.00MiB 00:08:31.584 SSD detected: yes 00:08:31.584 Zoned device: no 00:08:31.584 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:31.584 Checksum: crc32c 00:08:31.584 Number of devices: 1 00:08:31.584 Devices: 00:08:31.584 ID SIZE PATH 00:08:31.584 1 510.00MiB /dev/nvme0n1p1 00:08:31.584 00:08:31.584 11:36:02 -- common/autotest_common.sh@931 -- # return 0 00:08:31.584 11:36:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.584 11:36:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.584 11:36:02 -- target/filesystem.sh@25 -- # sync 00:08:31.584 11:36:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.584 11:36:02 -- target/filesystem.sh@27 -- # sync 00:08:31.584 11:36:02 -- target/filesystem.sh@29 -- # i=0 00:08:31.584 11:36:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.584 11:36:02 -- target/filesystem.sh@37 -- # kill -0 3608874 00:08:31.584 11:36:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.584 11:36:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.584 11:36:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.584 11:36:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.584 00:08:31.584 real 0m0.252s 00:08:31.584 user 0m0.034s 00:08:31.584 sys 0m0.128s 00:08:31.584 11:36:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.584 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:08:31.584 ************************************ 00:08:31.584 END TEST filesystem_in_capsule_btrfs 00:08:31.584 ************************************ 00:08:31.843 11:36:02 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:31.843 11:36:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:31.843 11:36:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.843 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:08:31.843 ************************************ 00:08:31.843 START TEST filesystem_in_capsule_xfs 00:08:31.843 ************************************ 00:08:31.843 11:36:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:31.843 11:36:02 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:31.843 11:36:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.843 
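Once the xfs subtest below finishes, the trace tears everything down: the test partition is removed under flock, the host disconnects, the subsystem is deleted over RPC, and the target process is stopped. Collected in one place (commands as they appear in the traces; 3608874 is this run's target PID, and rpc_cmd is assumed to wrap scripts/rpc.py):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # host-side detach
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 3608874                                           # killprocess: stop nvmf_tgt ...
    wait 3608874                                           # ... and reap it in the harness shell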
11:36:02 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:31.843 11:36:02 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:31.843 11:36:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:31.843 11:36:02 -- common/autotest_common.sh@914 -- # local i=0 00:08:31.843 11:36:02 -- common/autotest_common.sh@915 -- # local force 00:08:31.843 11:36:02 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:31.843 11:36:02 -- common/autotest_common.sh@920 -- # force=-f 00:08:31.843 11:36:02 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:31.843 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:31.843 = sectsz=512 attr=2, projid32bit=1 00:08:31.843 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:31.843 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:31.843 data = bsize=4096 blocks=130560, imaxpct=25 00:08:31.843 = sunit=0 swidth=0 blks 00:08:31.843 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:31.843 log =internal log bsize=4096 blocks=16384, version=2 00:08:31.843 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:31.843 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:31.843 Discarding blocks...Done. 00:08:31.843 11:36:02 -- common/autotest_common.sh@931 -- # return 0 00:08:31.843 11:36:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.843 11:36:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.843 11:36:02 -- target/filesystem.sh@25 -- # sync 00:08:31.843 11:36:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.843 11:36:02 -- target/filesystem.sh@27 -- # sync 00:08:31.843 11:36:02 -- target/filesystem.sh@29 -- # i=0 00:08:31.843 11:36:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.843 11:36:02 -- target/filesystem.sh@37 -- # kill -0 3608874 00:08:31.843 11:36:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.843 11:36:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.843 11:36:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.843 11:36:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.843 00:08:31.843 real 0m0.198s 00:08:31.843 user 0m0.019s 00:08:31.843 sys 0m0.090s 00:08:31.843 11:36:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.843 11:36:02 -- common/autotest_common.sh@10 -- # set +x 00:08:31.843 ************************************ 00:08:31.843 END TEST filesystem_in_capsule_xfs 00:08:31.843 ************************************ 00:08:32.102 11:36:02 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:32.102 11:36:02 -- target/filesystem.sh@93 -- # sync 00:08:32.102 11:36:02 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.037 11:36:03 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.037 11:36:03 -- common/autotest_common.sh@1208 -- # local i=0 00:08:33.037 11:36:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:33.037 11:36:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.037 11:36:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:33.037 11:36:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.037 11:36:03 -- common/autotest_common.sh@1220 -- # return 0 00:08:33.037 11:36:03 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:08:33.037 11:36:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.037 11:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:33.037 11:36:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.037 11:36:03 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:33.037 11:36:03 -- target/filesystem.sh@101 -- # killprocess 3608874 00:08:33.037 11:36:03 -- common/autotest_common.sh@936 -- # '[' -z 3608874 ']' 00:08:33.037 11:36:03 -- common/autotest_common.sh@940 -- # kill -0 3608874 00:08:33.037 11:36:03 -- common/autotest_common.sh@941 -- # uname 00:08:33.037 11:36:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.037 11:36:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3608874 00:08:33.037 11:36:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:33.037 11:36:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:33.037 11:36:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3608874' 00:08:33.037 killing process with pid 3608874 00:08:33.037 11:36:03 -- common/autotest_common.sh@955 -- # kill 3608874 00:08:33.037 11:36:03 -- common/autotest_common.sh@960 -- # wait 3608874 00:08:33.606 11:36:03 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:33.606 00:08:33.606 real 0m7.834s 00:08:33.606 user 0m30.408s 00:08:33.606 sys 0m1.212s 00:08:33.606 11:36:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.606 11:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:33.606 ************************************ 00:08:33.606 END TEST nvmf_filesystem_in_capsule 00:08:33.606 ************************************ 00:08:33.606 11:36:04 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:33.606 11:36:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:33.606 11:36:04 -- nvmf/common.sh@116 -- # sync 00:08:33.606 11:36:04 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:33.606 11:36:04 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:33.606 11:36:04 -- nvmf/common.sh@119 -- # set +e 00:08:33.606 11:36:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:33.606 11:36:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:33.606 rmmod nvme_rdma 00:08:33.606 rmmod nvme_fabrics 00:08:33.606 11:36:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:33.606 11:36:04 -- nvmf/common.sh@123 -- # set -e 00:08:33.606 11:36:04 -- nvmf/common.sh@124 -- # return 0 00:08:33.606 11:36:04 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:33.606 11:36:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:33.606 11:36:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:33.606 00:08:33.606 real 0m22.871s 00:08:33.606 user 1m3.006s 00:08:33.606 sys 0m7.648s 00:08:33.606 11:36:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.606 11:36:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.606 ************************************ 00:08:33.606 END TEST nvmf_filesystem 00:08:33.606 ************************************ 00:08:33.606 11:36:04 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:33.606 11:36:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:33.606 11:36:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.606 11:36:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.606 ************************************ 00:08:33.606 START TEST nvmf_discovery 00:08:33.606 
************************************ 00:08:33.606 11:36:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:33.606 * Looking for test storage... 00:08:33.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:33.865 11:36:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:33.865 11:36:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:33.865 11:36:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:33.865 11:36:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:33.865 11:36:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:33.865 11:36:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:33.865 11:36:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:33.865 11:36:04 -- scripts/common.sh@335 -- # IFS=.-: 00:08:33.865 11:36:04 -- scripts/common.sh@335 -- # read -ra ver1 00:08:33.865 11:36:04 -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.865 11:36:04 -- scripts/common.sh@336 -- # read -ra ver2 00:08:33.865 11:36:04 -- scripts/common.sh@337 -- # local 'op=<' 00:08:33.865 11:36:04 -- scripts/common.sh@339 -- # ver1_l=2 00:08:33.865 11:36:04 -- scripts/common.sh@340 -- # ver2_l=1 00:08:33.865 11:36:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:33.865 11:36:04 -- scripts/common.sh@343 -- # case "$op" in 00:08:33.865 11:36:04 -- scripts/common.sh@344 -- # : 1 00:08:33.865 11:36:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:33.865 11:36:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.865 11:36:04 -- scripts/common.sh@364 -- # decimal 1 00:08:33.865 11:36:04 -- scripts/common.sh@352 -- # local d=1 00:08:33.865 11:36:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.865 11:36:04 -- scripts/common.sh@354 -- # echo 1 00:08:33.865 11:36:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:33.865 11:36:04 -- scripts/common.sh@365 -- # decimal 2 00:08:33.865 11:36:04 -- scripts/common.sh@352 -- # local d=2 00:08:33.865 11:36:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.865 11:36:04 -- scripts/common.sh@354 -- # echo 2 00:08:33.865 11:36:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:33.865 11:36:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:33.865 11:36:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:33.865 11:36:04 -- scripts/common.sh@367 -- # return 0 00:08:33.865 11:36:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.865 11:36:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:33.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.865 --rc genhtml_branch_coverage=1 00:08:33.865 --rc genhtml_function_coverage=1 00:08:33.865 --rc genhtml_legend=1 00:08:33.865 --rc geninfo_all_blocks=1 00:08:33.866 --rc geninfo_unexecuted_blocks=1 00:08:33.866 00:08:33.866 ' 00:08:33.866 11:36:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:33.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.866 --rc genhtml_branch_coverage=1 00:08:33.866 --rc genhtml_function_coverage=1 00:08:33.866 --rc genhtml_legend=1 00:08:33.866 --rc geninfo_all_blocks=1 00:08:33.866 --rc geninfo_unexecuted_blocks=1 00:08:33.866 00:08:33.866 ' 00:08:33.866 11:36:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:33.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:33.866 --rc genhtml_branch_coverage=1 00:08:33.866 --rc genhtml_function_coverage=1 00:08:33.866 --rc genhtml_legend=1 00:08:33.866 --rc geninfo_all_blocks=1 00:08:33.866 --rc geninfo_unexecuted_blocks=1 00:08:33.866 00:08:33.866 ' 00:08:33.866 11:36:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:33.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.866 --rc genhtml_branch_coverage=1 00:08:33.866 --rc genhtml_function_coverage=1 00:08:33.866 --rc genhtml_legend=1 00:08:33.866 --rc geninfo_all_blocks=1 00:08:33.866 --rc geninfo_unexecuted_blocks=1 00:08:33.866 00:08:33.866 ' 00:08:33.866 11:36:04 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.866 11:36:04 -- nvmf/common.sh@7 -- # uname -s 00:08:33.866 11:36:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.866 11:36:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.866 11:36:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.866 11:36:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.866 11:36:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.866 11:36:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.866 11:36:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.866 11:36:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.866 11:36:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.866 11:36:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.866 11:36:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:33.866 11:36:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:33.866 11:36:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.866 11:36:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.866 11:36:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.866 11:36:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:33.866 11:36:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.866 11:36:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.866 11:36:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.866 11:36:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.866 11:36:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.866 11:36:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.866 11:36:04 -- paths/export.sh@5 -- # export PATH 00:08:33.866 11:36:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.866 11:36:04 -- nvmf/common.sh@46 -- # : 0 00:08:33.866 11:36:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:33.866 11:36:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:33.866 11:36:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:33.866 11:36:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.866 11:36:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.866 11:36:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:33.866 11:36:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:33.866 11:36:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:33.866 11:36:04 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:33.866 11:36:04 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:33.866 11:36:04 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:33.866 11:36:04 -- target/discovery.sh@15 -- # hash nvme 00:08:33.866 11:36:04 -- target/discovery.sh@20 -- # nvmftestinit 00:08:33.866 11:36:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:33.866 11:36:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.866 11:36:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:33.866 11:36:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:33.866 11:36:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:33.866 11:36:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.866 11:36:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.866 11:36:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.866 11:36:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:33.866 11:36:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:33.866 11:36:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:33.866 11:36:04 -- common/autotest_common.sh@10 -- # set +x 00:08:40.429 11:36:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:40.429 11:36:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:40.429 11:36:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:40.429 11:36:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:40.429 11:36:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:40.429 11:36:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:40.429 11:36:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:40.429 11:36:10 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:40.429 11:36:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:40.429 11:36:10 -- nvmf/common.sh@295 -- # e810=() 00:08:40.429 11:36:10 -- nvmf/common.sh@295 -- # local -ga e810 00:08:40.429 11:36:10 -- nvmf/common.sh@296 -- # x722=() 00:08:40.429 11:36:10 -- nvmf/common.sh@296 -- # local -ga x722 00:08:40.429 11:36:10 -- nvmf/common.sh@297 -- # mlx=() 00:08:40.429 11:36:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:40.429 11:36:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.429 11:36:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:40.429 11:36:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:40.429 11:36:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:40.429 11:36:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:40.429 11:36:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:40.429 11:36:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:40.429 11:36:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:40.429 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:40.429 11:36:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:40.429 11:36:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:40.429 11:36:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:40.429 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:40.429 11:36:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:40.429 11:36:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:40.429 11:36:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:40.429 
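The per-device loop that begins here resolves each detected Mellanox PCI function to its Linux net device through sysfs. A minimal sketch of that lookup, assuming the 0000:d9:00.0 address found in this run (the sysfs layout is standard Linux, nothing here is SPDK-specific):

pci=0000:d9:00.0
# every netdev backed by this PCI function appears under its sysfs node
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# keep only the interface names, e.g. mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"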
11:36:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.429 11:36:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:40.429 11:36:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.429 11:36:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:40.429 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:40.429 11:36:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.429 11:36:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:40.429 11:36:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.429 11:36:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:40.429 11:36:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.429 11:36:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:40.429 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:40.429 11:36:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.429 11:36:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:40.429 11:36:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:40.429 11:36:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:40.429 11:36:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:40.429 11:36:10 -- nvmf/common.sh@57 -- # uname 00:08:40.429 11:36:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:40.429 11:36:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:40.429 11:36:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:40.429 11:36:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:40.429 11:36:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:40.429 11:36:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:40.429 11:36:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:40.429 11:36:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:40.429 11:36:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:40.429 11:36:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:40.429 11:36:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:40.429 11:36:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:40.429 11:36:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:40.429 11:36:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:40.429 11:36:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:40.429 11:36:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:40.429 11:36:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:40.429 11:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.429 11:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:40.429 11:36:10 -- nvmf/common.sh@104 -- # continue 2 00:08:40.429 11:36:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:40.429 11:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.429 11:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.429 11:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:40.429 11:36:10 -- nvmf/common.sh@104 -- # continue 2 00:08:40.429 11:36:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:40.429 11:36:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:40.429 11:36:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:40.429 11:36:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:40.429 11:36:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:40.429 11:36:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:40.429 11:36:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:40.429 11:36:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:40.429 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:40.429 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:40.429 altname enp217s0f0np0 00:08:40.429 altname ens818f0np0 00:08:40.429 inet 192.168.100.8/24 scope global mlx_0_0 00:08:40.429 valid_lft forever preferred_lft forever 00:08:40.429 11:36:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:40.429 11:36:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:40.429 11:36:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:40.429 11:36:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:40.429 11:36:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:40.429 11:36:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:40.429 11:36:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:40.429 11:36:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:40.429 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:40.429 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:40.429 altname enp217s0f1np1 00:08:40.429 altname ens818f1np1 00:08:40.429 inet 192.168.100.9/24 scope global mlx_0_1 00:08:40.429 valid_lft forever preferred_lft forever 00:08:40.429 11:36:10 -- nvmf/common.sh@410 -- # return 0 00:08:40.429 11:36:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:40.429 11:36:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:40.429 11:36:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:40.429 11:36:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:40.429 11:36:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:40.430 11:36:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:40.430 11:36:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:40.430 11:36:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:40.430 11:36:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:40.430 11:36:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:40.430 11:36:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:40.430 11:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.430 11:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:40.430 11:36:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:40.430 11:36:10 -- nvmf/common.sh@104 -- # continue 2 00:08:40.430 11:36:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:40.430 11:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.430 11:36:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:40.430 11:36:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.430 11:36:10 -- nvmf/common.sh@102 -- 
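The get_ip_address helper traced around this point pulls the interface's first IPv4 address apart from its prefix length. The same one-liner outside the harness (interface name and address are the ones from this run; the helper name is the test suite's own):

get_ip_address() {
    local interface=$1
    # "6: mlx_0_0 inet 192.168.100.8/24 ..." -> "192.168.100.8"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # prints 192.168.100.8 on this node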
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:40.430 11:36:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:40.430 11:36:10 -- nvmf/common.sh@104 -- # continue 2 00:08:40.430 11:36:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:40.430 11:36:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:40.430 11:36:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:40.430 11:36:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:40.430 11:36:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:40.430 11:36:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:40.430 11:36:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:40.430 11:36:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:40.430 11:36:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:40.430 11:36:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:40.430 11:36:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:40.430 11:36:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:40.430 11:36:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:40.430 192.168.100.9' 00:08:40.430 11:36:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:40.430 192.168.100.9' 00:08:40.430 11:36:10 -- nvmf/common.sh@445 -- # head -n 1 00:08:40.430 11:36:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:40.430 11:36:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:40.430 192.168.100.9' 00:08:40.430 11:36:10 -- nvmf/common.sh@446 -- # head -n 1 00:08:40.430 11:36:10 -- nvmf/common.sh@446 -- # tail -n +2 00:08:40.430 11:36:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:40.430 11:36:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:40.430 11:36:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:40.430 11:36:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:40.430 11:36:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:40.430 11:36:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:40.430 11:36:10 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:40.430 11:36:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:40.430 11:36:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.430 11:36:10 -- common/autotest_common.sh@10 -- # set +x 00:08:40.430 11:36:10 -- nvmf/common.sh@469 -- # nvmfpid=3614208 00:08:40.430 11:36:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.430 11:36:10 -- nvmf/common.sh@470 -- # waitforlisten 3614208 00:08:40.430 11:36:10 -- common/autotest_common.sh@829 -- # '[' -z 3614208 ']' 00:08:40.430 11:36:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.430 11:36:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.430 11:36:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.430 11:36:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.430 11:36:10 -- common/autotest_common.sh@10 -- # set +x 00:08:40.430 [2024-12-03 11:36:10.453492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
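nvmfappstart, traced above, launches nvmf_tgt on four cores and blocks until its RPC socket answers. A rough equivalent outside the harness (the polling loop is an illustrative stand-in for the suite's waitforlisten helper; paths are relative to an SPDK checkout):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the target listens on its default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready"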
00:08:40.430 [2024-12-03 11:36:10.453537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.430 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.430 [2024-12-03 11:36:10.521855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.430 [2024-12-03 11:36:10.595872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:40.430 [2024-12-03 11:36:10.596004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.430 [2024-12-03 11:36:10.596014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.430 [2024-12-03 11:36:10.596023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.430 [2024-12-03 11:36:10.596088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.430 [2024-12-03 11:36:10.596192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.430 [2024-12-03 11:36:10.596215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.430 [2024-12-03 11:36:10.596217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.688 11:36:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.688 11:36:11 -- common/autotest_common.sh@862 -- # return 0 00:08:40.688 11:36:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:40.688 11:36:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.688 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 11:36:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.947 11:36:11 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 [2024-12-03 11:36:11.343005] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x926090/0x92a580) succeed. 00:08:40.947 [2024-12-03 11:36:11.352045] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x927680/0x96bc20) succeed. 
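The rpc_cmd calls traced in the next few entries build the whole discovery fixture: an RDMA transport, four null bdevs, four subsystems each with one namespace and an RDMA listener, a discovery listener, and one referral. Collected as plain rpc.py invocations, this is a sketch of what discovery.sh does rather than the script itself (serial-number padding shown approximately):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in 1 2 3 4; do
    # NULL_BDEV_SIZE / NULL_BLOCK_SIZE as set by the test
    $rpc bdev_null_create Null$i 102400 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

With this in place, the nvme discover output that follows reports six log entries: the current discovery subsystem, the four NVMe subsystems, and the 4430 referral.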
00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@26 -- # seq 1 4 00:08:40.947 11:36:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:40.947 11:36:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 Null1 00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 [2024-12-03 11:36:11.519044] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:40.947 11:36:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 Null2 00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:40.947 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.947 11:36:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:40.947 11:36:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:40.947 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.947 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 Null3 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:41.207 11:36:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 Null4 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:41.207 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 11:36:11 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:08:41.207 00:08:41.207 Discovery Log Number of Records 6, Generation counter 6 00:08:41.207 =====Discovery Log Entry 0====== 00:08:41.208 trtype: 
rdma 00:08:41.208 adrfam: ipv4 00:08:41.208 subtype: current discovery subsystem 00:08:41.208 treq: not required 00:08:41.208 portid: 0 00:08:41.208 trsvcid: 4420 00:08:41.208 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:41.208 traddr: 192.168.100.8 00:08:41.208 eflags: explicit discovery connections, duplicate discovery information 00:08:41.208 rdma_prtype: not specified 00:08:41.208 rdma_qptype: connected 00:08:41.208 rdma_cms: rdma-cm 00:08:41.208 rdma_pkey: 0x0000 00:08:41.208 =====Discovery Log Entry 1====== 00:08:41.208 trtype: rdma 00:08:41.208 adrfam: ipv4 00:08:41.208 subtype: nvme subsystem 00:08:41.208 treq: not required 00:08:41.208 portid: 0 00:08:41.208 trsvcid: 4420 00:08:41.208 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:41.208 traddr: 192.168.100.8 00:08:41.208 eflags: none 00:08:41.208 rdma_prtype: not specified 00:08:41.208 rdma_qptype: connected 00:08:41.208 rdma_cms: rdma-cm 00:08:41.208 rdma_pkey: 0x0000 00:08:41.208 =====Discovery Log Entry 2====== 00:08:41.208 trtype: rdma 00:08:41.208 adrfam: ipv4 00:08:41.208 subtype: nvme subsystem 00:08:41.208 treq: not required 00:08:41.208 portid: 0 00:08:41.208 trsvcid: 4420 00:08:41.208 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:41.208 traddr: 192.168.100.8 00:08:41.208 eflags: none 00:08:41.208 rdma_prtype: not specified 00:08:41.208 rdma_qptype: connected 00:08:41.208 rdma_cms: rdma-cm 00:08:41.208 rdma_pkey: 0x0000 00:08:41.208 =====Discovery Log Entry 3====== 00:08:41.208 trtype: rdma 00:08:41.208 adrfam: ipv4 00:08:41.208 subtype: nvme subsystem 00:08:41.208 treq: not required 00:08:41.208 portid: 0 00:08:41.208 trsvcid: 4420 00:08:41.208 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:41.208 traddr: 192.168.100.8 00:08:41.208 eflags: none 00:08:41.208 rdma_prtype: not specified 00:08:41.208 rdma_qptype: connected 00:08:41.208 rdma_cms: rdma-cm 00:08:41.208 rdma_pkey: 0x0000 00:08:41.208 =====Discovery Log Entry 4====== 00:08:41.208 trtype: rdma 00:08:41.208 adrfam: ipv4 00:08:41.208 subtype: nvme subsystem 00:08:41.208 treq: not required 00:08:41.208 portid: 0 00:08:41.208 trsvcid: 4420 00:08:41.208 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:41.208 traddr: 192.168.100.8 00:08:41.208 eflags: none 00:08:41.208 rdma_prtype: not specified 00:08:41.208 rdma_qptype: connected 00:08:41.208 rdma_cms: rdma-cm 00:08:41.208 rdma_pkey: 0x0000 00:08:41.208 =====Discovery Log Entry 5====== 00:08:41.208 trtype: rdma 00:08:41.208 adrfam: ipv4 00:08:41.208 subtype: discovery subsystem referral 00:08:41.208 treq: not required 00:08:41.208 portid: 0 00:08:41.208 trsvcid: 4430 00:08:41.208 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:41.208 traddr: 192.168.100.8 00:08:41.208 eflags: none 00:08:41.208 rdma_prtype: unrecognized 00:08:41.208 rdma_qptype: unrecognized 00:08:41.208 rdma_cms: unrecognized 00:08:41.208 rdma_pkey: 0x0000 00:08:41.208 11:36:11 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:41.208 Perform nvmf subsystem discovery via RPC 00:08:41.208 11:36:11 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:41.208 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.208 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.208 [2024-12-03 11:36:11.739469] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:41.208 [ 00:08:41.208 { 00:08:41.208 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:41.208 "subtype": "Discovery", 
00:08:41.208 "listen_addresses": [ 00:08:41.208 { 00:08:41.208 "transport": "RDMA", 00:08:41.208 "trtype": "RDMA", 00:08:41.208 "adrfam": "IPv4", 00:08:41.208 "traddr": "192.168.100.8", 00:08:41.208 "trsvcid": "4420" 00:08:41.208 } 00:08:41.208 ], 00:08:41.208 "allow_any_host": true, 00:08:41.208 "hosts": [] 00:08:41.208 }, 00:08:41.208 { 00:08:41.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.208 "subtype": "NVMe", 00:08:41.208 "listen_addresses": [ 00:08:41.208 { 00:08:41.208 "transport": "RDMA", 00:08:41.208 "trtype": "RDMA", 00:08:41.208 "adrfam": "IPv4", 00:08:41.208 "traddr": "192.168.100.8", 00:08:41.208 "trsvcid": "4420" 00:08:41.208 } 00:08:41.208 ], 00:08:41.208 "allow_any_host": true, 00:08:41.208 "hosts": [], 00:08:41.208 "serial_number": "SPDK00000000000001", 00:08:41.208 "model_number": "SPDK bdev Controller", 00:08:41.208 "max_namespaces": 32, 00:08:41.208 "min_cntlid": 1, 00:08:41.208 "max_cntlid": 65519, 00:08:41.208 "namespaces": [ 00:08:41.208 { 00:08:41.208 "nsid": 1, 00:08:41.208 "bdev_name": "Null1", 00:08:41.208 "name": "Null1", 00:08:41.208 "nguid": "BDC55BDDA42848268B6931C409F3EF9D", 00:08:41.208 "uuid": "bdc55bdd-a428-4826-8b69-31c409f3ef9d" 00:08:41.208 } 00:08:41.208 ] 00:08:41.208 }, 00:08:41.208 { 00:08:41.208 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:41.208 "subtype": "NVMe", 00:08:41.208 "listen_addresses": [ 00:08:41.208 { 00:08:41.208 "transport": "RDMA", 00:08:41.208 "trtype": "RDMA", 00:08:41.208 "adrfam": "IPv4", 00:08:41.208 "traddr": "192.168.100.8", 00:08:41.208 "trsvcid": "4420" 00:08:41.208 } 00:08:41.208 ], 00:08:41.208 "allow_any_host": true, 00:08:41.208 "hosts": [], 00:08:41.208 "serial_number": "SPDK00000000000002", 00:08:41.208 "model_number": "SPDK bdev Controller", 00:08:41.208 "max_namespaces": 32, 00:08:41.208 "min_cntlid": 1, 00:08:41.208 "max_cntlid": 65519, 00:08:41.208 "namespaces": [ 00:08:41.208 { 00:08:41.208 "nsid": 1, 00:08:41.208 "bdev_name": "Null2", 00:08:41.208 "name": "Null2", 00:08:41.208 "nguid": "B903AA82088641C895CA5B877FBBF761", 00:08:41.208 "uuid": "b903aa82-0886-41c8-95ca-5b877fbbf761" 00:08:41.208 } 00:08:41.208 ] 00:08:41.208 }, 00:08:41.208 { 00:08:41.208 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:41.208 "subtype": "NVMe", 00:08:41.208 "listen_addresses": [ 00:08:41.208 { 00:08:41.208 "transport": "RDMA", 00:08:41.208 "trtype": "RDMA", 00:08:41.208 "adrfam": "IPv4", 00:08:41.208 "traddr": "192.168.100.8", 00:08:41.209 "trsvcid": "4420" 00:08:41.209 } 00:08:41.209 ], 00:08:41.209 "allow_any_host": true, 00:08:41.209 "hosts": [], 00:08:41.209 "serial_number": "SPDK00000000000003", 00:08:41.209 "model_number": "SPDK bdev Controller", 00:08:41.209 "max_namespaces": 32, 00:08:41.209 "min_cntlid": 1, 00:08:41.209 "max_cntlid": 65519, 00:08:41.209 "namespaces": [ 00:08:41.209 { 00:08:41.209 "nsid": 1, 00:08:41.209 "bdev_name": "Null3", 00:08:41.209 "name": "Null3", 00:08:41.209 "nguid": "4FC27DADB1384A69A413BB6A34CA680F", 00:08:41.209 "uuid": "4fc27dad-b138-4a69-a413-bb6a34ca680f" 00:08:41.209 } 00:08:41.209 ] 00:08:41.209 }, 00:08:41.209 { 00:08:41.209 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:41.209 "subtype": "NVMe", 00:08:41.209 "listen_addresses": [ 00:08:41.209 { 00:08:41.209 "transport": "RDMA", 00:08:41.209 "trtype": "RDMA", 00:08:41.209 "adrfam": "IPv4", 00:08:41.209 "traddr": "192.168.100.8", 00:08:41.209 "trsvcid": "4420" 00:08:41.209 } 00:08:41.209 ], 00:08:41.209 "allow_any_host": true, 00:08:41.209 "hosts": [], 00:08:41.209 "serial_number": "SPDK00000000000004", 00:08:41.209 "model_number": "SPDK bdev 
Controller", 00:08:41.209 "max_namespaces": 32, 00:08:41.209 "min_cntlid": 1, 00:08:41.209 "max_cntlid": 65519, 00:08:41.209 "namespaces": [ 00:08:41.209 { 00:08:41.209 "nsid": 1, 00:08:41.209 "bdev_name": "Null4", 00:08:41.209 "name": "Null4", 00:08:41.209 "nguid": "883A67A4FE3945B58FCC976047052326", 00:08:41.209 "uuid": "883a67a4-fe39-45b5-8fcc-976047052326" 00:08:41.209 } 00:08:41.209 ] 00:08:41.209 } 00:08:41.209 ] 00:08:41.209 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.209 11:36:11 -- target/discovery.sh@42 -- # seq 1 4 00:08:41.209 11:36:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:41.209 11:36:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.209 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.209 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.209 11:36:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:41.209 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.209 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.209 11:36:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:41.209 11:36:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:41.209 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.209 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.209 11:36:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:41.209 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.209 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.209 11:36:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:41.209 11:36:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:41.209 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.209 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.469 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.469 11:36:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:41.469 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.469 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.469 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.469 11:36:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:41.469 11:36:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:41.469 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.469 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.469 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.469 11:36:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:41.469 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.469 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.469 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.469 11:36:11 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:41.469 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.469 
11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.469 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.469 11:36:11 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:41.469 11:36:11 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:41.469 11:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.469 11:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.469 11:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.469 11:36:11 -- target/discovery.sh@49 -- # check_bdevs= 00:08:41.469 11:36:11 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:41.469 11:36:11 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:41.469 11:36:11 -- target/discovery.sh@57 -- # nvmftestfini 00:08:41.469 11:36:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:41.469 11:36:11 -- nvmf/common.sh@116 -- # sync 00:08:41.469 11:36:11 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:41.469 11:36:11 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:41.469 11:36:11 -- nvmf/common.sh@119 -- # set +e 00:08:41.469 11:36:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:41.469 11:36:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:41.469 rmmod nvme_rdma 00:08:41.469 rmmod nvme_fabrics 00:08:41.469 11:36:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:41.469 11:36:11 -- nvmf/common.sh@123 -- # set -e 00:08:41.469 11:36:11 -- nvmf/common.sh@124 -- # return 0 00:08:41.469 11:36:11 -- nvmf/common.sh@477 -- # '[' -n 3614208 ']' 00:08:41.469 11:36:11 -- nvmf/common.sh@478 -- # killprocess 3614208 00:08:41.469 11:36:11 -- common/autotest_common.sh@936 -- # '[' -z 3614208 ']' 00:08:41.469 11:36:11 -- common/autotest_common.sh@940 -- # kill -0 3614208 00:08:41.469 11:36:11 -- common/autotest_common.sh@941 -- # uname 00:08:41.469 11:36:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.469 11:36:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3614208 00:08:41.469 11:36:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.469 11:36:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.469 11:36:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3614208' 00:08:41.469 killing process with pid 3614208 00:08:41.469 11:36:12 -- common/autotest_common.sh@955 -- # kill 3614208 00:08:41.469 [2024-12-03 11:36:12.014401] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:41.469 11:36:12 -- common/autotest_common.sh@960 -- # wait 3614208 00:08:41.728 11:36:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:41.728 11:36:12 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:41.728 00:08:41.728 real 0m8.141s 00:08:41.728 user 0m8.357s 00:08:41.728 sys 0m5.163s 00:08:41.728 11:36:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.728 11:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:41.728 ************************************ 00:08:41.728 END TEST nvmf_discovery 00:08:41.728 ************************************ 00:08:41.728 11:36:12 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:41.728 11:36:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:41.728 11:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.728 11:36:12 -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.728 ************************************ 00:08:41.728 START TEST nvmf_referrals 00:08:41.728 ************************************ 00:08:41.728 11:36:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:41.988 * Looking for test storage... 00:08:41.988 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:41.988 11:36:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:41.988 11:36:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:41.988 11:36:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:41.988 11:36:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:41.988 11:36:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:41.988 11:36:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:41.988 11:36:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:41.988 11:36:12 -- scripts/common.sh@335 -- # IFS=.-: 00:08:41.988 11:36:12 -- scripts/common.sh@335 -- # read -ra ver1 00:08:41.988 11:36:12 -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.988 11:36:12 -- scripts/common.sh@336 -- # read -ra ver2 00:08:41.988 11:36:12 -- scripts/common.sh@337 -- # local 'op=<' 00:08:41.988 11:36:12 -- scripts/common.sh@339 -- # ver1_l=2 00:08:41.988 11:36:12 -- scripts/common.sh@340 -- # ver2_l=1 00:08:41.988 11:36:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:41.988 11:36:12 -- scripts/common.sh@343 -- # case "$op" in 00:08:41.988 11:36:12 -- scripts/common.sh@344 -- # : 1 00:08:41.988 11:36:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:41.988 11:36:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.988 11:36:12 -- scripts/common.sh@364 -- # decimal 1 00:08:41.988 11:36:12 -- scripts/common.sh@352 -- # local d=1 00:08:41.988 11:36:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.988 11:36:12 -- scripts/common.sh@354 -- # echo 1 00:08:41.988 11:36:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:41.988 11:36:12 -- scripts/common.sh@365 -- # decimal 2 00:08:41.988 11:36:12 -- scripts/common.sh@352 -- # local d=2 00:08:41.988 11:36:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.988 11:36:12 -- scripts/common.sh@354 -- # echo 2 00:08:41.988 11:36:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:41.988 11:36:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:41.988 11:36:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:41.988 11:36:12 -- scripts/common.sh@367 -- # return 0 00:08:41.988 11:36:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.988 11:36:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:41.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.988 --rc genhtml_branch_coverage=1 00:08:41.988 --rc genhtml_function_coverage=1 00:08:41.988 --rc genhtml_legend=1 00:08:41.988 --rc geninfo_all_blocks=1 00:08:41.988 --rc geninfo_unexecuted_blocks=1 00:08:41.988 00:08:41.988 ' 00:08:41.988 11:36:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:41.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.988 --rc genhtml_branch_coverage=1 00:08:41.988 --rc genhtml_function_coverage=1 00:08:41.988 --rc genhtml_legend=1 00:08:41.988 --rc geninfo_all_blocks=1 00:08:41.988 --rc geninfo_unexecuted_blocks=1 00:08:41.988 00:08:41.988 ' 00:08:41.988 
11:36:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:41.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.988 --rc genhtml_branch_coverage=1 00:08:41.988 --rc genhtml_function_coverage=1 00:08:41.988 --rc genhtml_legend=1 00:08:41.988 --rc geninfo_all_blocks=1 00:08:41.988 --rc geninfo_unexecuted_blocks=1 00:08:41.988 00:08:41.988 ' 00:08:41.988 11:36:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:41.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.988 --rc genhtml_branch_coverage=1 00:08:41.988 --rc genhtml_function_coverage=1 00:08:41.988 --rc genhtml_legend=1 00:08:41.988 --rc geninfo_all_blocks=1 00:08:41.988 --rc geninfo_unexecuted_blocks=1 00:08:41.988 00:08:41.988 ' 00:08:41.988 11:36:12 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.988 11:36:12 -- nvmf/common.sh@7 -- # uname -s 00:08:41.988 11:36:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.988 11:36:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.988 11:36:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.988 11:36:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.988 11:36:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.988 11:36:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.988 11:36:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.988 11:36:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.988 11:36:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.988 11:36:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.988 11:36:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:41.988 11:36:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:41.988 11:36:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.988 11:36:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.988 11:36:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.988 11:36:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:41.988 11:36:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.988 11:36:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.988 11:36:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.988 11:36:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.988 11:36:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.988 11:36:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.988 11:36:12 -- paths/export.sh@5 -- # export PATH 00:08:41.988 11:36:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.988 11:36:12 -- nvmf/common.sh@46 -- # : 0 00:08:41.988 11:36:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:41.988 11:36:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:41.988 11:36:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:41.988 11:36:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.988 11:36:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.988 11:36:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:41.988 11:36:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:41.988 11:36:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:41.988 11:36:12 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:41.988 11:36:12 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:41.988 11:36:12 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:41.988 11:36:12 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:41.988 11:36:12 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:41.988 11:36:12 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:41.988 11:36:12 -- target/referrals.sh@37 -- # nvmftestinit 00:08:41.988 11:36:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:41.988 11:36:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.988 11:36:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:41.988 11:36:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:41.988 11:36:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:41.988 11:36:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.988 11:36:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.988 11:36:12 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:41.988 11:36:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:41.988 11:36:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:41.988 11:36:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:41.988 11:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:48.556 11:36:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:48.556 11:36:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:48.556 11:36:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:48.556 11:36:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:48.556 11:36:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:48.556 11:36:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:48.556 11:36:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:48.556 11:36:18 -- nvmf/common.sh@294 -- # net_devs=() 00:08:48.556 11:36:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:48.556 11:36:18 -- nvmf/common.sh@295 -- # e810=() 00:08:48.556 11:36:18 -- nvmf/common.sh@295 -- # local -ga e810 00:08:48.556 11:36:18 -- nvmf/common.sh@296 -- # x722=() 00:08:48.556 11:36:18 -- nvmf/common.sh@296 -- # local -ga x722 00:08:48.556 11:36:18 -- nvmf/common.sh@297 -- # mlx=() 00:08:48.556 11:36:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:48.556 11:36:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.556 11:36:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.556 11:36:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.556 11:36:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.556 11:36:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.556 11:36:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.557 11:36:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.557 11:36:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.557 11:36:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.557 11:36:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.557 11:36:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.557 11:36:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:48.557 11:36:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:48.557 11:36:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:48.557 11:36:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:48.557 11:36:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:48.557 11:36:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:48.557 11:36:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:48.557 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:48.557 11:36:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.557 11:36:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:48.557 11:36:18 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:48.557 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:48.557 11:36:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.557 11:36:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:48.557 11:36:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:48.557 11:36:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.557 11:36:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:48.557 11:36:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.557 11:36:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:48.557 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:48.557 11:36:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.557 11:36:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:48.557 11:36:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.557 11:36:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:48.557 11:36:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.557 11:36:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:48.557 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:48.557 11:36:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.557 11:36:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:48.557 11:36:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:48.557 11:36:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:48.557 11:36:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:48.557 11:36:18 -- nvmf/common.sh@57 -- # uname 00:08:48.557 11:36:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:48.557 11:36:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:48.557 11:36:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:48.557 11:36:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:48.557 11:36:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:48.557 11:36:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:48.557 11:36:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:48.557 11:36:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:48.557 11:36:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:48.557 11:36:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:48.557 11:36:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:48.557 11:36:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.557 11:36:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:48.557 11:36:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:48.557 11:36:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.557 11:36:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:48.557 11:36:18 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:08:48.557 11:36:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.557 11:36:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:48.557 11:36:18 -- nvmf/common.sh@104 -- # continue 2 00:08:48.557 11:36:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:48.557 11:36:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.557 11:36:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.557 11:36:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:48.557 11:36:18 -- nvmf/common.sh@104 -- # continue 2 00:08:48.557 11:36:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:48.557 11:36:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:48.557 11:36:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:48.557 11:36:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:48.557 11:36:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:48.557 11:36:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:48.557 11:36:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:48.557 11:36:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:48.557 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.557 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:48.557 altname enp217s0f0np0 00:08:48.557 altname ens818f0np0 00:08:48.557 inet 192.168.100.8/24 scope global mlx_0_0 00:08:48.557 valid_lft forever preferred_lft forever 00:08:48.557 11:36:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:48.557 11:36:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:48.557 11:36:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:48.557 11:36:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:48.557 11:36:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:48.557 11:36:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:48.557 11:36:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:48.557 11:36:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:48.557 11:36:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:48.557 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.557 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:48.557 altname enp217s0f1np1 00:08:48.557 altname ens818f1np1 00:08:48.557 inet 192.168.100.9/24 scope global mlx_0_1 00:08:48.557 valid_lft forever preferred_lft forever 00:08:48.557 11:36:18 -- nvmf/common.sh@410 -- # return 0 00:08:48.557 11:36:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:48.557 11:36:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:48.557 11:36:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:48.558 11:36:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:48.558 11:36:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:48.558 11:36:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.558 11:36:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:48.558 11:36:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:48.558 11:36:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.558 11:36:19 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:48.558 11:36:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:48.558 11:36:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.558 11:36:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.558 11:36:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:48.558 11:36:19 -- nvmf/common.sh@104 -- # continue 2 00:08:48.558 11:36:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:48.558 11:36:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.558 11:36:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.558 11:36:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.558 11:36:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.558 11:36:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:48.558 11:36:19 -- nvmf/common.sh@104 -- # continue 2 00:08:48.558 11:36:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:48.558 11:36:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:48.558 11:36:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:48.558 11:36:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:48.558 11:36:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:48.558 11:36:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:48.558 11:36:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:48.558 11:36:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:48.558 11:36:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:48.558 11:36:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:48.558 11:36:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:48.558 11:36:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:48.558 11:36:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:48.558 192.168.100.9' 00:08:48.558 11:36:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:48.558 192.168.100.9' 00:08:48.558 11:36:19 -- nvmf/common.sh@445 -- # head -n 1 00:08:48.558 11:36:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:48.558 11:36:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:48.558 192.168.100.9' 00:08:48.558 11:36:19 -- nvmf/common.sh@446 -- # tail -n +2 00:08:48.558 11:36:19 -- nvmf/common.sh@446 -- # head -n 1 00:08:48.558 11:36:19 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:48.558 11:36:19 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:48.558 11:36:19 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:48.558 11:36:19 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:48.558 11:36:19 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:48.558 11:36:19 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:48.558 11:36:19 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:48.558 11:36:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:48.558 11:36:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.558 11:36:19 -- common/autotest_common.sh@10 -- # set +x 00:08:48.558 11:36:19 -- nvmf/common.sh@469 -- # nvmfpid=3617735 00:08:48.558 11:36:19 -- nvmf/common.sh@470 -- # waitforlisten 3617735 00:08:48.558 11:36:19 -- common/autotest_common.sh@829 -- # '[' -z 3617735 ']' 00:08:48.558 11:36:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.558 11:36:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.558 11:36:19 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.558 11:36:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.558 11:36:19 -- common/autotest_common.sh@10 -- # set +x 00:08:48.558 11:36:19 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.558 [2024-12-03 11:36:19.149427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:48.558 [2024-12-03 11:36:19.149475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.817 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.817 [2024-12-03 11:36:19.220018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.817 [2024-12-03 11:36:19.293790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:48.817 [2024-12-03 11:36:19.293895] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.817 [2024-12-03 11:36:19.293905] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.817 [2024-12-03 11:36:19.293913] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.817 [2024-12-03 11:36:19.293957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.817 [2024-12-03 11:36:19.293975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.817 [2024-12-03 11:36:19.294079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.817 [2024-12-03 11:36:19.294081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.472 11:36:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.472 11:36:19 -- common/autotest_common.sh@862 -- # return 0 00:08:49.472 11:36:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:49.472 11:36:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.472 11:36:19 -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 11:36:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.472 11:36:20 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:49.472 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.472 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 [2024-12-03 11:36:20.038310] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2396090/0x239a580) succeed. 00:08:49.472 [2024-12-03 11:36:20.047442] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2397680/0x23dbc20) succeed. 
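The trace above brings nvmf_tgt up on four cores and arms the cleanup trap; the referral test then registers the RDMA transport and exposes the discovery service on the first RDMA IP. A minimal sketch of that RPC sequence, assuming scripts/rpc.py talking to the default /var/tmp/spdk.sock socket (the suite's rpc_cmd wrapper is assumed to do the equivalent):

  # Register the RDMA transport with 1024 shared buffers and an 8 KiB I/O unit size,
  # then listen for discovery traffic on 192.168.100.8:8009 (values taken from the trace).
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 8009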
00:08:49.731 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.731 11:36:20 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:49.731 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.731 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.731 [2024-12-03 11:36:20.173464] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:49.731 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.731 11:36:20 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:49.731 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.731 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.731 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.731 11:36:20 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:49.731 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.731 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.731 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.731 11:36:20 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:49.731 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.731 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.731 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.731 11:36:20 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.731 11:36:20 -- target/referrals.sh@48 -- # jq length 00:08:49.731 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.731 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.731 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.731 11:36:20 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:49.731 11:36:20 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:49.731 11:36:20 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:49.731 11:36:20 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.731 11:36:20 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:49.731 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.731 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.731 11:36:20 -- target/referrals.sh@21 -- # sort 00:08:49.731 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.731 11:36:20 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:49.731 11:36:20 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:49.731 11:36:20 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:49.731 11:36:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:49.731 11:36:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:49.731 11:36:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:49.731 11:36:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:49.731 11:36:20 -- target/referrals.sh@26 -- # sort 00:08:49.990 11:36:20 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
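Three referrals (127.0.0.2 through 127.0.0.4, trsvcid 4430) have just been added and are read back both over RPC and through the initiator's discovery log. A minimal sketch of that round trip, assuming the same discovery endpoint and jq filters as the trace:

  # Add the referrals and list them back from the target.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Cross-check from the host side: every discovery-log record other than the
  # current discovery subsystem should point at one of the referral addresses.
  nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort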
00:08:49.990 11:36:20 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:49.990 11:36:20 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:49.990 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.990 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.990 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.990 11:36:20 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:49.990 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.990 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.990 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.990 11:36:20 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:49.990 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.990 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.990 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.990 11:36:20 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.990 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.990 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.990 11:36:20 -- target/referrals.sh@56 -- # jq length 00:08:49.990 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.990 11:36:20 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:49.990 11:36:20 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:49.990 11:36:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:49.990 11:36:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:49.990 11:36:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:49.990 11:36:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:49.990 11:36:20 -- target/referrals.sh@26 -- # sort 00:08:49.990 11:36:20 -- target/referrals.sh@26 -- # echo 00:08:49.990 11:36:20 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:49.990 11:36:20 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:49.990 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.990 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.990 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.990 11:36:20 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:49.990 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.990 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.990 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.990 11:36:20 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:49.990 11:36:20 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:49.990 11:36:20 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.990 11:36:20 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:49.990 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.990 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.990 11:36:20 -- 
target/referrals.sh@21 -- # sort 00:08:50.249 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.249 11:36:20 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:50.249 11:36:20 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:50.249 11:36:20 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:50.249 11:36:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:50.249 11:36:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:50.249 11:36:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.249 11:36:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:50.249 11:36:20 -- target/referrals.sh@26 -- # sort 00:08:50.249 11:36:20 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:50.249 11:36:20 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:50.249 11:36:20 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:50.249 11:36:20 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:50.249 11:36:20 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:50.249 11:36:20 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.249 11:36:20 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:50.249 11:36:20 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:50.249 11:36:20 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:50.249 11:36:20 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:50.249 11:36:20 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:50.249 11:36:20 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.249 11:36:20 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:50.507 11:36:20 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:50.507 11:36:20 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:50.507 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.507 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:50.507 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.507 11:36:20 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:50.507 11:36:20 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:50.507 11:36:20 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:50.507 11:36:20 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:50.507 11:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.507 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:08:50.507 11:36:20 -- target/referrals.sh@21 -- # 
sort 00:08:50.507 11:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.507 11:36:21 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:50.507 11:36:21 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:50.507 11:36:21 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:50.507 11:36:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:50.507 11:36:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:50.507 11:36:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.507 11:36:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:50.507 11:36:21 -- target/referrals.sh@26 -- # sort 00:08:50.766 11:36:21 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:50.766 11:36:21 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:50.766 11:36:21 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:50.766 11:36:21 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:50.766 11:36:21 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:50.766 11:36:21 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.766 11:36:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:50.766 11:36:21 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:50.766 11:36:21 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:50.766 11:36:21 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:50.766 11:36:21 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:50.766 11:36:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:50.766 11:36:21 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.766 11:36:21 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:50.766 11:36:21 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:50.766 11:36:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.766 11:36:21 -- common/autotest_common.sh@10 -- # set +x 00:08:50.766 11:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.766 11:36:21 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:50.766 11:36:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.766 11:36:21 -- common/autotest_common.sh@10 -- # set +x 00:08:50.766 11:36:21 -- target/referrals.sh@82 -- # jq length 00:08:50.766 11:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.025 11:36:21 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:51.025 11:36:21 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:51.025 11:36:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.025 11:36:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.025 11:36:21 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:51.025 11:36:21 -- target/referrals.sh@26 -- # sort 00:08:51.025 11:36:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.025 11:36:21 -- target/referrals.sh@26 -- # echo 00:08:51.025 11:36:21 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:51.025 11:36:21 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:51.025 11:36:21 -- target/referrals.sh@86 -- # nvmftestfini 00:08:51.025 11:36:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:51.025 11:36:21 -- nvmf/common.sh@116 -- # sync 00:08:51.025 11:36:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:51.025 11:36:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:51.025 11:36:21 -- nvmf/common.sh@119 -- # set +e 00:08:51.025 11:36:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:51.025 11:36:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:51.025 rmmod nvme_rdma 00:08:51.025 rmmod nvme_fabrics 00:08:51.025 11:36:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:51.025 11:36:21 -- nvmf/common.sh@123 -- # set -e 00:08:51.025 11:36:21 -- nvmf/common.sh@124 -- # return 0 00:08:51.025 11:36:21 -- nvmf/common.sh@477 -- # '[' -n 3617735 ']' 00:08:51.025 11:36:21 -- nvmf/common.sh@478 -- # killprocess 3617735 00:08:51.025 11:36:21 -- common/autotest_common.sh@936 -- # '[' -z 3617735 ']' 00:08:51.025 11:36:21 -- common/autotest_common.sh@940 -- # kill -0 3617735 00:08:51.025 11:36:21 -- common/autotest_common.sh@941 -- # uname 00:08:51.025 11:36:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:51.025 11:36:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3617735 00:08:51.025 11:36:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:51.025 11:36:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:51.025 11:36:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3617735' 00:08:51.025 killing process with pid 3617735 00:08:51.025 11:36:21 -- common/autotest_common.sh@955 -- # kill 3617735 00:08:51.025 11:36:21 -- common/autotest_common.sh@960 -- # wait 3617735 00:08:51.283 11:36:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:51.283 11:36:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:51.283 00:08:51.283 real 0m9.546s 00:08:51.283 user 0m12.817s 00:08:51.283 sys 0m5.921s 00:08:51.283 11:36:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.283 11:36:21 -- common/autotest_common.sh@10 -- # set +x 00:08:51.283 ************************************ 00:08:51.283 END TEST nvmf_referrals 00:08:51.283 ************************************ 00:08:51.543 11:36:21 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:51.543 11:36:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:51.543 11:36:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.543 11:36:21 -- common/autotest_common.sh@10 -- # set +x 00:08:51.543 ************************************ 00:08:51.543 START TEST nvmf_connect_disconnect 00:08:51.543 ************************************ 00:08:51.543 11:36:21 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:51.543 * Looking for test storage... 00:08:51.543 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:51.543 11:36:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:51.543 11:36:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:51.543 11:36:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:51.543 11:36:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:51.543 11:36:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:51.543 11:36:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:51.543 11:36:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:51.543 11:36:22 -- scripts/common.sh@335 -- # IFS=.-: 00:08:51.543 11:36:22 -- scripts/common.sh@335 -- # read -ra ver1 00:08:51.543 11:36:22 -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.543 11:36:22 -- scripts/common.sh@336 -- # read -ra ver2 00:08:51.543 11:36:22 -- scripts/common.sh@337 -- # local 'op=<' 00:08:51.543 11:36:22 -- scripts/common.sh@339 -- # ver1_l=2 00:08:51.543 11:36:22 -- scripts/common.sh@340 -- # ver2_l=1 00:08:51.543 11:36:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:51.543 11:36:22 -- scripts/common.sh@343 -- # case "$op" in 00:08:51.543 11:36:22 -- scripts/common.sh@344 -- # : 1 00:08:51.543 11:36:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:51.543 11:36:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.543 11:36:22 -- scripts/common.sh@364 -- # decimal 1 00:08:51.543 11:36:22 -- scripts/common.sh@352 -- # local d=1 00:08:51.543 11:36:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.543 11:36:22 -- scripts/common.sh@354 -- # echo 1 00:08:51.543 11:36:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:51.543 11:36:22 -- scripts/common.sh@365 -- # decimal 2 00:08:51.543 11:36:22 -- scripts/common.sh@352 -- # local d=2 00:08:51.543 11:36:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.543 11:36:22 -- scripts/common.sh@354 -- # echo 2 00:08:51.543 11:36:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:51.543 11:36:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:51.543 11:36:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:51.543 11:36:22 -- scripts/common.sh@367 -- # return 0 00:08:51.543 11:36:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.543 11:36:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.543 --rc genhtml_branch_coverage=1 00:08:51.543 --rc genhtml_function_coverage=1 00:08:51.543 --rc genhtml_legend=1 00:08:51.543 --rc geninfo_all_blocks=1 00:08:51.543 --rc geninfo_unexecuted_blocks=1 00:08:51.543 00:08:51.543 ' 00:08:51.543 11:36:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.543 --rc genhtml_branch_coverage=1 00:08:51.543 --rc genhtml_function_coverage=1 00:08:51.543 --rc genhtml_legend=1 00:08:51.543 --rc geninfo_all_blocks=1 00:08:51.543 --rc geninfo_unexecuted_blocks=1 00:08:51.543 00:08:51.543 ' 00:08:51.543 11:36:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.543 --rc genhtml_branch_coverage=1 00:08:51.543 --rc genhtml_function_coverage=1 
00:08:51.543 --rc genhtml_legend=1 00:08:51.543 --rc geninfo_all_blocks=1 00:08:51.543 --rc geninfo_unexecuted_blocks=1 00:08:51.543 00:08:51.543 ' 00:08:51.543 11:36:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:51.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.543 --rc genhtml_branch_coverage=1 00:08:51.543 --rc genhtml_function_coverage=1 00:08:51.543 --rc genhtml_legend=1 00:08:51.543 --rc geninfo_all_blocks=1 00:08:51.543 --rc geninfo_unexecuted_blocks=1 00:08:51.543 00:08:51.543 ' 00:08:51.543 11:36:22 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.543 11:36:22 -- nvmf/common.sh@7 -- # uname -s 00:08:51.543 11:36:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.543 11:36:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.543 11:36:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.543 11:36:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.543 11:36:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.543 11:36:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.543 11:36:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.543 11:36:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.543 11:36:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.543 11:36:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.543 11:36:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:51.543 11:36:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:51.543 11:36:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.544 11:36:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.544 11:36:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.544 11:36:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:51.544 11:36:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.544 11:36:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.544 11:36:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.544 11:36:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.544 11:36:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.544 11:36:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.544 11:36:22 -- paths/export.sh@5 -- # export PATH 00:08:51.544 11:36:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.544 11:36:22 -- nvmf/common.sh@46 -- # : 0 00:08:51.544 11:36:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:51.544 11:36:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:51.544 11:36:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:51.544 11:36:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.544 11:36:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.544 11:36:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:51.544 11:36:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:51.544 11:36:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:51.544 11:36:22 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.544 11:36:22 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.544 11:36:22 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:51.544 11:36:22 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:51.544 11:36:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.544 11:36:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:51.544 11:36:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:51.544 11:36:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:51.544 11:36:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.544 11:36:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.544 11:36:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.544 11:36:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:51.544 11:36:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:51.544 11:36:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:51.544 11:36:22 -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 11:36:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:58.123 11:36:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:58.123 11:36:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:58.123 11:36:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:58.123 11:36:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:58.123 11:36:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:58.123 11:36:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:58.123 11:36:28 -- nvmf/common.sh@294 -- # net_devs=() 00:08:58.123 11:36:28 -- nvmf/common.sh@294 -- # local -ga net_devs 
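What begins here is the same supported-NIC discovery already run before the referral test: the helper collects Intel e810/x722 and Mellanox device IDs, keeps the ConnectX-4 Lx (0x15b3:0x1015) functions, and then looks under /sys/bus/pci/devices/<bdf>/net for their kernel interfaces. A rough standalone equivalent, assuming lspci is available (the suite itself relies on a pci_bus_cache populated elsewhere, not on lspci):

  # Walk Mellanox (vendor 0x15b3) PCI functions and report their net interfaces,
  # mirroring the "Found 0000:d9:00.x" / "Found net devices under ..." lines.
  for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] && echo "Found net device under $pci: $(basename "$netdev")"
      done
  done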
00:08:58.123 11:36:28 -- nvmf/common.sh@295 -- # e810=() 00:08:58.123 11:36:28 -- nvmf/common.sh@295 -- # local -ga e810 00:08:58.123 11:36:28 -- nvmf/common.sh@296 -- # x722=() 00:08:58.123 11:36:28 -- nvmf/common.sh@296 -- # local -ga x722 00:08:58.123 11:36:28 -- nvmf/common.sh@297 -- # mlx=() 00:08:58.123 11:36:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:58.123 11:36:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.123 11:36:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:58.123 11:36:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:58.123 11:36:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:58.123 11:36:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:58.123 11:36:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:58.123 11:36:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:58.123 11:36:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:58.123 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:58.123 11:36:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:58.123 11:36:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:58.123 11:36:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:58.123 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:58.123 11:36:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:58.123 11:36:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:58.123 11:36:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:58.123 11:36:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.123 11:36:28 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:58.123 11:36:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.123 11:36:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:58.123 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:58.123 11:36:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.123 11:36:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:58.123 11:36:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.123 11:36:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:58.123 11:36:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.123 11:36:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:58.123 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:58.123 11:36:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.123 11:36:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:58.123 11:36:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:58.123 11:36:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:58.123 11:36:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:58.123 11:36:28 -- nvmf/common.sh@57 -- # uname 00:08:58.123 11:36:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:58.123 11:36:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:58.123 11:36:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:58.123 11:36:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:58.123 11:36:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:58.123 11:36:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:58.123 11:36:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:58.123 11:36:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:58.123 11:36:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:58.123 11:36:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:58.123 11:36:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:58.123 11:36:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:58.123 11:36:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:58.123 11:36:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:58.123 11:36:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:58.123 11:36:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:58.123 11:36:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:58.123 11:36:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.123 11:36:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:58.123 11:36:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:58.123 11:36:28 -- nvmf/common.sh@104 -- # continue 2 00:08:58.123 11:36:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:58.123 11:36:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.124 11:36:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:58.124 11:36:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.124 11:36:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:58.124 11:36:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:58.124 11:36:28 -- nvmf/common.sh@104 -- # continue 2 00:08:58.124 11:36:28 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:58.124 11:36:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:58.124 11:36:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:58.124 11:36:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:58.124 11:36:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:58.124 11:36:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:58.124 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:58.124 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:58.124 altname enp217s0f0np0 00:08:58.124 altname ens818f0np0 00:08:58.124 inet 192.168.100.8/24 scope global mlx_0_0 00:08:58.124 valid_lft forever preferred_lft forever 00:08:58.124 11:36:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:58.124 11:36:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:58.124 11:36:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:58.124 11:36:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:58.124 11:36:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:58.124 11:36:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:58.124 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:58.124 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:58.124 altname enp217s0f1np1 00:08:58.124 altname ens818f1np1 00:08:58.124 inet 192.168.100.9/24 scope global mlx_0_1 00:08:58.124 valid_lft forever preferred_lft forever 00:08:58.124 11:36:28 -- nvmf/common.sh@410 -- # return 0 00:08:58.124 11:36:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:58.124 11:36:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:58.124 11:36:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:58.124 11:36:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:58.124 11:36:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:58.124 11:36:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:58.124 11:36:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:58.124 11:36:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:58.124 11:36:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:58.124 11:36:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:58.124 11:36:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:58.124 11:36:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.124 11:36:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:58.124 11:36:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:58.124 11:36:28 -- nvmf/common.sh@104 -- # continue 2 00:08:58.124 11:36:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:58.124 11:36:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.124 11:36:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:58.124 11:36:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.124 11:36:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:58.124 11:36:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 
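allocate_nic_ips walks the RDMA interface list and pulls each interface's first IPv4 address with the ip/awk/cut pipeline traced above. Trimmed to a standalone helper (same pipeline as the nvmf/common.sh lines in the trace; the full helper may add namespace handling not shown here):

  # First IPv4 address of an interface, or empty output if none is assigned.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 in this run
  get_ip_address mlx_0_1   # 192.168.100.9 in this run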
00:08:58.124 11:36:28 -- nvmf/common.sh@104 -- # continue 2 00:08:58.124 11:36:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:58.124 11:36:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:58.124 11:36:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:58.124 11:36:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:58.124 11:36:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:58.124 11:36:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:58.124 11:36:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:58.124 11:36:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:58.124 192.168.100.9' 00:08:58.124 11:36:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:58.124 192.168.100.9' 00:08:58.124 11:36:28 -- nvmf/common.sh@445 -- # head -n 1 00:08:58.124 11:36:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:58.124 11:36:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:58.124 192.168.100.9' 00:08:58.124 11:36:28 -- nvmf/common.sh@446 -- # tail -n +2 00:08:58.124 11:36:28 -- nvmf/common.sh@446 -- # head -n 1 00:08:58.124 11:36:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:58.124 11:36:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:58.124 11:36:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:58.124 11:36:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:58.124 11:36:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:58.124 11:36:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:58.383 11:36:28 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:58.383 11:36:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:58.383 11:36:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.383 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.383 11:36:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.383 11:36:28 -- nvmf/common.sh@469 -- # nvmfpid=3621751 00:08:58.383 11:36:28 -- nvmf/common.sh@470 -- # waitforlisten 3621751 00:08:58.383 11:36:28 -- common/autotest_common.sh@829 -- # '[' -z 3621751 ']' 00:08:58.383 11:36:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.383 11:36:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.383 11:36:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.383 11:36:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.383 11:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.383 [2024-12-03 11:36:28.781791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
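nvmfappstart launches build/bin/nvmf_tgt and waitforlisten then blocks until the RPC socket at /var/tmp/spdk.sock answers (max_retries=100 in the trace). A minimal illustration of such a wait loop, not the suite's actual waitforlisten implementation:

  # Start the target, then poll until its RPC socket responds or 100 tries elapse.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done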
00:08:58.383 [2024-12-03 11:36:28.781838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.383 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.383 [2024-12-03 11:36:28.850350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.383 [2024-12-03 11:36:28.917596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:58.383 [2024-12-03 11:36:28.917709] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.383 [2024-12-03 11:36:28.917718] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.383 [2024-12-03 11:36:28.917726] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.383 [2024-12-03 11:36:28.917823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.383 [2024-12-03 11:36:28.917917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.383 [2024-12-03 11:36:28.917986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.383 [2024-12-03 11:36:28.917988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.317 11:36:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.317 11:36:29 -- common/autotest_common.sh@862 -- # return 0 00:08:59.317 11:36:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:59.317 11:36:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.317 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.317 11:36:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:59.317 11:36:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.317 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.317 [2024-12-03 11:36:29.649498] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:59.317 [2024-12-03 11:36:29.669980] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18e1090/0x18e5580) succeed. 00:08:59.317 [2024-12-03 11:36:29.679139] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18e2680/0x1926c20) succeed. 
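From here the trace builds the data-path subsystem and then drives 100 connect/disconnect iterations against it, each ending in one of the "disconnected 1 controller(s)" lines below. A minimal sketch of the setup and of a single iteration, assuming the bdev name, subsystem NQN, listener address, and queue count shown in the trace:

  # Target side: 64 MiB malloc bdev exported through nqn.2016-06.io.spdk:cnode1 on RDMA port 4420.
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Host side, one of the 100 iterations: connect with 8 I/O queues, then disconnect by NQN,
  # which prints the "NQN:... disconnected 1 controller(s)" lines seen below.
  nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1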
00:08:59.317 11:36:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:59.317 11:36:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.317 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.317 11:36:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.317 11:36:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.317 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.317 11:36:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.317 11:36:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.317 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.317 11:36:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:59.317 11:36:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.317 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.317 [2024-12-03 11:36:29.818838] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:59.317 11:36:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:59.317 11:36:29 -- target/connect_disconnect.sh@34 -- # set +x 00:09:02.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.627 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:43.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.737 11:41:45 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:14.737 11:41:45 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:14.737 11:41:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:14.737 11:41:45 -- nvmf/common.sh@116 -- # sync 00:14:14.737 11:41:45 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:14.737 11:41:45 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:14.737 11:41:45 -- nvmf/common.sh@119 -- # set +e 00:14:14.737 11:41:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:14.737 11:41:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:14.737 rmmod nvme_rdma 00:14:14.737 rmmod nvme_fabrics 00:14:14.737 11:41:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:14.737 11:41:45 -- nvmf/common.sh@123 -- # set -e 00:14:14.737 11:41:45 -- nvmf/common.sh@124 -- # return 0 00:14:14.737 11:41:45 -- nvmf/common.sh@477 -- # '[' -n 3621751 ']' 00:14:14.737 11:41:45 -- nvmf/common.sh@478 -- # killprocess 3621751 00:14:14.737 11:41:45 -- common/autotest_common.sh@936 -- # '[' -z 3621751 ']' 00:14:14.737 11:41:45 -- common/autotest_common.sh@940 -- # kill -0 3621751 00:14:14.737 11:41:45 -- common/autotest_common.sh@941 -- # uname 00:14:14.737 11:41:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.737 11:41:45 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3621751 00:14:14.738 11:41:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:14.738 11:41:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:14.738 11:41:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3621751' 00:14:14.738 killing process with pid 3621751 00:14:14.738 11:41:45 -- common/autotest_common.sh@955 -- # kill 3621751 00:14:14.738 11:41:45 -- common/autotest_common.sh@960 -- # wait 3621751 00:14:14.996 11:41:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:14.996 11:41:45 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:14.996 00:14:14.996 real 5m23.598s 00:14:14.996 user 21m3.260s 00:14:14.996 sys 0m17.919s 00:14:14.996 11:41:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:14.996 11:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:14.996 ************************************ 00:14:14.996 END TEST nvmf_connect_disconnect 00:14:14.996 ************************************ 00:14:14.997 11:41:45 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:14.997 11:41:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:14.997 11:41:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.997 11:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:14.997 ************************************ 00:14:14.997 START TEST nvmf_multitarget 00:14:14.997 ************************************ 00:14:14.997 11:41:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:15.256 * Looking for test storage... 00:14:15.256 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:15.256 11:41:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:15.256 11:41:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:15.256 11:41:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:15.256 11:41:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:15.256 11:41:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:15.256 11:41:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:15.256 11:41:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:15.256 11:41:45 -- scripts/common.sh@335 -- # IFS=.-: 00:14:15.256 11:41:45 -- scripts/common.sh@335 -- # read -ra ver1 00:14:15.256 11:41:45 -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.256 11:41:45 -- scripts/common.sh@336 -- # read -ra ver2 00:14:15.256 11:41:45 -- scripts/common.sh@337 -- # local 'op=<' 00:14:15.256 11:41:45 -- scripts/common.sh@339 -- # ver1_l=2 00:14:15.256 11:41:45 -- scripts/common.sh@340 -- # ver2_l=1 00:14:15.256 11:41:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:15.256 11:41:45 -- scripts/common.sh@343 -- # case "$op" in 00:14:15.256 11:41:45 -- scripts/common.sh@344 -- # : 1 00:14:15.256 11:41:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:15.256 11:41:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.256 11:41:45 -- scripts/common.sh@364 -- # decimal 1 00:14:15.256 11:41:45 -- scripts/common.sh@352 -- # local d=1 00:14:15.256 11:41:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.256 11:41:45 -- scripts/common.sh@354 -- # echo 1 00:14:15.256 11:41:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:15.256 11:41:45 -- scripts/common.sh@365 -- # decimal 2 00:14:15.256 11:41:45 -- scripts/common.sh@352 -- # local d=2 00:14:15.256 11:41:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.256 11:41:45 -- scripts/common.sh@354 -- # echo 2 00:14:15.256 11:41:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:15.256 11:41:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:15.256 11:41:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:15.256 11:41:45 -- scripts/common.sh@367 -- # return 0 00:14:15.256 11:41:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.256 11:41:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:15.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.256 --rc genhtml_branch_coverage=1 00:14:15.256 --rc genhtml_function_coverage=1 00:14:15.256 --rc genhtml_legend=1 00:14:15.256 --rc geninfo_all_blocks=1 00:14:15.256 --rc geninfo_unexecuted_blocks=1 00:14:15.256 00:14:15.256 ' 00:14:15.256 11:41:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:15.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.256 --rc genhtml_branch_coverage=1 00:14:15.256 --rc genhtml_function_coverage=1 00:14:15.256 --rc genhtml_legend=1 00:14:15.256 --rc geninfo_all_blocks=1 00:14:15.256 --rc geninfo_unexecuted_blocks=1 00:14:15.256 00:14:15.256 ' 00:14:15.256 11:41:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:15.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.256 --rc genhtml_branch_coverage=1 00:14:15.256 --rc genhtml_function_coverage=1 00:14:15.256 --rc genhtml_legend=1 00:14:15.256 --rc geninfo_all_blocks=1 00:14:15.256 --rc geninfo_unexecuted_blocks=1 00:14:15.256 00:14:15.256 ' 00:14:15.256 11:41:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:15.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.256 --rc genhtml_branch_coverage=1 00:14:15.256 --rc genhtml_function_coverage=1 00:14:15.256 --rc genhtml_legend=1 00:14:15.256 --rc geninfo_all_blocks=1 00:14:15.256 --rc geninfo_unexecuted_blocks=1 00:14:15.256 00:14:15.256 ' 00:14:15.257 11:41:45 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.257 11:41:45 -- nvmf/common.sh@7 -- # uname -s 00:14:15.257 11:41:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.257 11:41:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.257 11:41:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.257 11:41:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.257 11:41:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.257 11:41:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.257 11:41:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.257 11:41:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.257 11:41:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.257 11:41:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.257 11:41:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:15.257 11:41:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:15.257 11:41:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.257 11:41:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.257 11:41:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.257 11:41:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:15.257 11:41:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.257 11:41:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.257 11:41:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.257 11:41:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.257 11:41:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.257 11:41:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.257 11:41:45 -- paths/export.sh@5 -- # export PATH 00:14:15.257 11:41:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.257 11:41:45 -- nvmf/common.sh@46 -- # : 0 00:14:15.257 11:41:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:15.257 11:41:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:15.257 11:41:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:15.257 11:41:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.257 11:41:45 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.257 11:41:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:15.257 11:41:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:15.257 11:41:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:15.257 11:41:45 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:15.257 11:41:45 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:15.257 11:41:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:15.257 11:41:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.257 11:41:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:15.257 11:41:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:15.257 11:41:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:15.257 11:41:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.257 11:41:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.257 11:41:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.257 11:41:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:15.257 11:41:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:15.257 11:41:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:15.257 11:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:21.827 11:41:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.827 11:41:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:21.827 11:41:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:21.827 11:41:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:21.827 11:41:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:21.827 11:41:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:21.828 11:41:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:21.828 11:41:52 -- nvmf/common.sh@294 -- # net_devs=() 00:14:21.828 11:41:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:21.828 11:41:52 -- nvmf/common.sh@295 -- # e810=() 00:14:21.828 11:41:52 -- nvmf/common.sh@295 -- # local -ga e810 00:14:21.828 11:41:52 -- nvmf/common.sh@296 -- # x722=() 00:14:21.828 11:41:52 -- nvmf/common.sh@296 -- # local -ga x722 00:14:21.828 11:41:52 -- nvmf/common.sh@297 -- # mlx=() 00:14:21.828 11:41:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:21.828 11:41:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.828 11:41:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:21.828 11:41:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 
00:14:21.828 11:41:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:21.828 11:41:52 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:21.828 11:41:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:21.828 11:41:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:21.828 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:21.828 11:41:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.828 11:41:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:21.828 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:21.828 11:41:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.828 11:41:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:21.828 11:41:52 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.828 11:41:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.828 11:41:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.828 11:41:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:21.828 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:21.828 11:41:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.828 11:41:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.828 11:41:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.828 11:41:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.828 11:41:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:21.828 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:21.828 11:41:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.828 11:41:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:21.828 11:41:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:21.828 11:41:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:21.828 11:41:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:21.828 11:41:52 -- nvmf/common.sh@57 -- # uname 00:14:21.828 11:41:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:21.828 11:41:52 -- nvmf/common.sh@61 -- # modprobe ib_cm 
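The "Found 0000:d9:00.0 (0x15b3 - 0x1015)" records come from nvmf/common.sh walking the PCI bus for supported RDMA NICs and mapping each one to its kernel net device. A rough hand-run equivalent (a sketch, not the script's exact code; only the 0x15b3 Mellanox vendor ID and the sysfs layout are taken from the trace):

    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do   # all Mellanox (vendor 0x15b3) functions
        echo "Found $pci"
        ls "/sys/bus/pci/devices/$pci/net/"                   # net device behind it, e.g. mlx_0_0 / mlx_0_1
    done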
00:14:21.828 11:41:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:21.828 11:41:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:21.828 11:41:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:21.828 11:41:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:21.828 11:41:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:21.828 11:41:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:21.828 11:41:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:21.828 11:41:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:21.828 11:41:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:21.828 11:41:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.828 11:41:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:21.828 11:41:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:21.828 11:41:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.828 11:41:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:21.828 11:41:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:21.828 11:41:52 -- nvmf/common.sh@104 -- # continue 2 00:14:21.828 11:41:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.828 11:41:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:21.828 11:41:52 -- nvmf/common.sh@104 -- # continue 2 00:14:21.828 11:41:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:21.828 11:41:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:21.828 11:41:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:21.828 11:41:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:21.828 11:41:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:21.828 11:41:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:21.828 11:41:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:21.828 11:41:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:21.828 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.828 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:21.828 altname enp217s0f0np0 00:14:21.828 altname ens818f0np0 00:14:21.828 inet 192.168.100.8/24 scope global mlx_0_0 00:14:21.828 valid_lft forever preferred_lft forever 00:14:21.828 11:41:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:21.828 11:41:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:21.828 11:41:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:21.828 11:41:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:21.828 11:41:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:21.828 11:41:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:21.828 11:41:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:21.828 11:41:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:21.828 7: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:14:21.828 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:21.828 altname enp217s0f1np1 00:14:21.828 altname ens818f1np1 00:14:21.828 inet 192.168.100.9/24 scope global mlx_0_1 00:14:21.828 valid_lft forever preferred_lft forever 00:14:21.828 11:41:52 -- nvmf/common.sh@410 -- # return 0 00:14:21.828 11:41:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:21.828 11:41:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:21.828 11:41:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:21.828 11:41:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:21.828 11:41:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:21.828 11:41:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.828 11:41:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:21.828 11:41:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:21.828 11:41:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:22.087 11:41:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:22.087 11:41:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:22.087 11:41:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.087 11:41:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:22.087 11:41:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:22.087 11:41:52 -- nvmf/common.sh@104 -- # continue 2 00:14:22.087 11:41:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:22.087 11:41:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.087 11:41:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:22.087 11:41:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.087 11:41:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:22.087 11:41:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:22.087 11:41:52 -- nvmf/common.sh@104 -- # continue 2 00:14:22.087 11:41:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:22.087 11:41:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:22.087 11:41:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:22.087 11:41:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:22.087 11:41:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.087 11:41:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.087 11:41:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:22.087 11:41:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:22.087 11:41:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:22.087 11:41:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:22.087 11:41:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.087 11:41:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.087 11:41:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:22.087 192.168.100.9' 00:14:22.087 11:41:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:22.087 192.168.100.9' 00:14:22.087 11:41:52 -- nvmf/common.sh@445 -- # head -n 1 00:14:22.087 11:41:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:22.087 11:41:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:22.087 192.168.100.9' 00:14:22.087 11:41:52 -- nvmf/common.sh@446 -- # tail -n +2 00:14:22.087 11:41:52 -- nvmf/common.sh@446 -- # head -n 1 00:14:22.087 11:41:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:22.087 11:41:52 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:22.087 11:41:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:22.087 11:41:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:22.087 11:41:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:22.087 11:41:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:22.087 11:41:52 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:22.087 11:41:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:22.087 11:41:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:22.088 11:41:52 -- common/autotest_common.sh@10 -- # set +x 00:14:22.088 11:41:52 -- nvmf/common.sh@469 -- # nvmfpid=3681802 00:14:22.088 11:41:52 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.088 11:41:52 -- nvmf/common.sh@470 -- # waitforlisten 3681802 00:14:22.088 11:41:52 -- common/autotest_common.sh@829 -- # '[' -z 3681802 ']' 00:14:22.088 11:41:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.088 11:41:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.088 11:41:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.088 11:41:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.088 11:41:52 -- common/autotest_common.sh@10 -- # set +x 00:14:22.088 [2024-12-03 11:41:52.568377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:22.088 [2024-12-03 11:41:52.568427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.088 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.088 [2024-12-03 11:41:52.638082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.369 [2024-12-03 11:41:52.712695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:22.369 [2024-12-03 11:41:52.712797] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.369 [2024-12-03 11:41:52.712807] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.369 [2024-12-03 11:41:52.712815] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
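nvmfappstart above amounts to launching the target binary with the flags shown and waiting for its RPC socket; a minimal sketch (binary path and flags from the trace; the polling loop is an assumption, the real waitforlisten helper does more):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the JSON-RPC socket to appear
    # the target is now ready for rpc.py / multitarget_rpc.py calls; kill "$nvmfpid" tears it down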
00:14:22.369 [2024-12-03 11:41:52.712864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.369 [2024-12-03 11:41:52.712958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.369 [2024-12-03 11:41:52.713045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.369 [2024-12-03 11:41:52.713047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.995 11:41:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.995 11:41:53 -- common/autotest_common.sh@862 -- # return 0 00:14:22.995 11:41:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:22.995 11:41:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.995 11:41:53 -- common/autotest_common.sh@10 -- # set +x 00:14:22.995 11:41:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.995 11:41:53 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:22.995 11:41:53 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:22.995 11:41:53 -- target/multitarget.sh@21 -- # jq length 00:14:22.995 11:41:53 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:22.995 11:41:53 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:23.253 "nvmf_tgt_1" 00:14:23.253 11:41:53 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:23.253 "nvmf_tgt_2" 00:14:23.253 11:41:53 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:23.253 11:41:53 -- target/multitarget.sh@28 -- # jq length 00:14:23.512 11:41:53 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:23.512 11:41:53 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:23.512 true 00:14:23.512 11:41:53 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:23.512 true 00:14:23.512 11:41:54 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:23.512 11:41:54 -- target/multitarget.sh@35 -- # jq length 00:14:23.771 11:41:54 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:23.771 11:41:54 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:23.771 11:41:54 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:23.771 11:41:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:23.771 11:41:54 -- nvmf/common.sh@116 -- # sync 00:14:23.771 11:41:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:23.771 11:41:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:23.771 11:41:54 -- nvmf/common.sh@119 -- # set +e 00:14:23.771 11:41:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:23.771 11:41:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:23.771 rmmod nvme_rdma 00:14:23.771 rmmod nvme_fabrics 00:14:23.771 11:41:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:23.771 11:41:54 -- nvmf/common.sh@123 -- # set -e 00:14:23.771 11:41:54 -- nvmf/common.sh@124 -- # 
return 0 00:14:23.771 11:41:54 -- nvmf/common.sh@477 -- # '[' -n 3681802 ']' 00:14:23.771 11:41:54 -- nvmf/common.sh@478 -- # killprocess 3681802 00:14:23.771 11:41:54 -- common/autotest_common.sh@936 -- # '[' -z 3681802 ']' 00:14:23.771 11:41:54 -- common/autotest_common.sh@940 -- # kill -0 3681802 00:14:23.771 11:41:54 -- common/autotest_common.sh@941 -- # uname 00:14:23.771 11:41:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.771 11:41:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3681802 00:14:23.771 11:41:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:23.771 11:41:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:23.771 11:41:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3681802' 00:14:23.771 killing process with pid 3681802 00:14:23.771 11:41:54 -- common/autotest_common.sh@955 -- # kill 3681802 00:14:23.771 11:41:54 -- common/autotest_common.sh@960 -- # wait 3681802 00:14:24.030 11:41:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:24.030 11:41:54 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:24.030 00:14:24.030 real 0m8.919s 00:14:24.030 user 0m9.822s 00:14:24.030 sys 0m5.647s 00:14:24.030 11:41:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:24.030 11:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.030 ************************************ 00:14:24.030 END TEST nvmf_multitarget 00:14:24.030 ************************************ 00:14:24.030 11:41:54 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:24.030 11:41:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:24.030 11:41:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:24.030 11:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.030 ************************************ 00:14:24.030 START TEST nvmf_rpc 00:14:24.030 ************************************ 00:14:24.030 11:41:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:24.290 * Looking for test storage... 
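The multitarget checks that just finished amount to creating two extra targets, counting them with jq, and deleting them again; a condensed sketch using the same multitarget_rpc.py helper (paths and arguments from the trace; the bracketed count checks stand in for the script's '!=' comparisons):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at the start
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only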
00:14:24.290 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:24.290 11:41:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:24.290 11:41:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:24.290 11:41:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:24.290 11:41:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:24.290 11:41:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:24.290 11:41:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:24.290 11:41:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:24.290 11:41:54 -- scripts/common.sh@335 -- # IFS=.-: 00:14:24.290 11:41:54 -- scripts/common.sh@335 -- # read -ra ver1 00:14:24.290 11:41:54 -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.290 11:41:54 -- scripts/common.sh@336 -- # read -ra ver2 00:14:24.290 11:41:54 -- scripts/common.sh@337 -- # local 'op=<' 00:14:24.290 11:41:54 -- scripts/common.sh@339 -- # ver1_l=2 00:14:24.290 11:41:54 -- scripts/common.sh@340 -- # ver2_l=1 00:14:24.290 11:41:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:24.290 11:41:54 -- scripts/common.sh@343 -- # case "$op" in 00:14:24.290 11:41:54 -- scripts/common.sh@344 -- # : 1 00:14:24.290 11:41:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:24.290 11:41:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:24.290 11:41:54 -- scripts/common.sh@364 -- # decimal 1 00:14:24.290 11:41:54 -- scripts/common.sh@352 -- # local d=1 00:14:24.290 11:41:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.290 11:41:54 -- scripts/common.sh@354 -- # echo 1 00:14:24.290 11:41:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:24.290 11:41:54 -- scripts/common.sh@365 -- # decimal 2 00:14:24.291 11:41:54 -- scripts/common.sh@352 -- # local d=2 00:14:24.291 11:41:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.291 11:41:54 -- scripts/common.sh@354 -- # echo 2 00:14:24.291 11:41:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:24.291 11:41:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:24.291 11:41:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:24.291 11:41:54 -- scripts/common.sh@367 -- # return 0 00:14:24.291 11:41:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.291 11:41:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.291 --rc genhtml_branch_coverage=1 00:14:24.291 --rc genhtml_function_coverage=1 00:14:24.291 --rc genhtml_legend=1 00:14:24.291 --rc geninfo_all_blocks=1 00:14:24.291 --rc geninfo_unexecuted_blocks=1 00:14:24.291 00:14:24.291 ' 00:14:24.291 11:41:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.291 --rc genhtml_branch_coverage=1 00:14:24.291 --rc genhtml_function_coverage=1 00:14:24.291 --rc genhtml_legend=1 00:14:24.291 --rc geninfo_all_blocks=1 00:14:24.291 --rc geninfo_unexecuted_blocks=1 00:14:24.291 00:14:24.291 ' 00:14:24.291 11:41:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.291 --rc genhtml_branch_coverage=1 00:14:24.291 --rc genhtml_function_coverage=1 00:14:24.291 --rc genhtml_legend=1 00:14:24.291 --rc geninfo_all_blocks=1 00:14:24.291 --rc geninfo_unexecuted_blocks=1 00:14:24.291 00:14:24.291 ' 
00:14:24.291 11:41:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.291 --rc genhtml_branch_coverage=1 00:14:24.291 --rc genhtml_function_coverage=1 00:14:24.291 --rc genhtml_legend=1 00:14:24.291 --rc geninfo_all_blocks=1 00:14:24.291 --rc geninfo_unexecuted_blocks=1 00:14:24.291 00:14:24.291 ' 00:14:24.291 11:41:54 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.291 11:41:54 -- nvmf/common.sh@7 -- # uname -s 00:14:24.291 11:41:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.291 11:41:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.291 11:41:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.291 11:41:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.291 11:41:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.291 11:41:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.291 11:41:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.291 11:41:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.291 11:41:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.291 11:41:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.291 11:41:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:24.291 11:41:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:24.291 11:41:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.291 11:41:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.291 11:41:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.291 11:41:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:24.291 11:41:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.291 11:41:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.291 11:41:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.291 11:41:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.291 11:41:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.291 11:41:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.291 11:41:54 -- paths/export.sh@5 -- # export PATH 00:14:24.291 11:41:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.291 11:41:54 -- nvmf/common.sh@46 -- # : 0 00:14:24.291 11:41:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:24.291 11:41:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:24.291 11:41:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:24.291 11:41:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.291 11:41:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.291 11:41:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:24.291 11:41:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:24.291 11:41:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:24.291 11:41:54 -- target/rpc.sh@11 -- # loops=5 00:14:24.291 11:41:54 -- target/rpc.sh@23 -- # nvmftestinit 00:14:24.291 11:41:54 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:24.291 11:41:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.291 11:41:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:24.291 11:41:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:24.291 11:41:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:24.291 11:41:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.291 11:41:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.291 11:41:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.291 11:41:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:24.291 11:41:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:24.291 11:41:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:24.291 11:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:30.860 11:42:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:30.860 11:42:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:30.860 11:42:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:30.860 11:42:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:30.860 11:42:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:30.860 11:42:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:30.860 11:42:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:30.860 11:42:01 -- nvmf/common.sh@294 -- # net_devs=() 00:14:30.860 11:42:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:30.860 11:42:01 -- nvmf/common.sh@295 -- # e810=() 00:14:30.860 11:42:01 -- nvmf/common.sh@295 -- # local -ga e810 00:14:30.860 
11:42:01 -- nvmf/common.sh@296 -- # x722=() 00:14:30.860 11:42:01 -- nvmf/common.sh@296 -- # local -ga x722 00:14:30.860 11:42:01 -- nvmf/common.sh@297 -- # mlx=() 00:14:30.860 11:42:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:30.860 11:42:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.860 11:42:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:30.860 11:42:01 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:30.860 11:42:01 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:30.860 11:42:01 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:30.860 11:42:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:30.860 11:42:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.860 11:42:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:30.860 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:30.860 11:42:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:30.860 11:42:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.860 11:42:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:30.860 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:30.860 11:42:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:30.860 11:42:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:30.860 11:42:01 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.860 11:42:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.860 11:42:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.860 11:42:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:30.860 11:42:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:30.860 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:30.860 11:42:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.860 11:42:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.860 11:42:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.860 11:42:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.860 11:42:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.860 11:42:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:30.860 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:30.860 11:42:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.860 11:42:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:30.860 11:42:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:30.860 11:42:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:30.860 11:42:01 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:30.860 11:42:01 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:30.860 11:42:01 -- nvmf/common.sh@57 -- # uname 00:14:30.860 11:42:01 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:30.860 11:42:01 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:31.120 11:42:01 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:31.120 11:42:01 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:31.120 11:42:01 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:31.120 11:42:01 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:31.120 11:42:01 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:31.120 11:42:01 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:31.120 11:42:01 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:31.120 11:42:01 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:31.120 11:42:01 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:31.120 11:42:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:31.120 11:42:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:31.120 11:42:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:31.120 11:42:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:31.120 11:42:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:31.120 11:42:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:31.120 11:42:01 -- nvmf/common.sh@104 -- # continue 2 00:14:31.120 11:42:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:31.120 11:42:01 -- nvmf/common.sh@104 -- # continue 2 00:14:31.120 11:42:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:31.120 11:42:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:14:31.120 11:42:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:31.120 11:42:01 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:31.120 11:42:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:31.120 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:31.120 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:31.120 altname enp217s0f0np0 00:14:31.120 altname ens818f0np0 00:14:31.120 inet 192.168.100.8/24 scope global mlx_0_0 00:14:31.120 valid_lft forever preferred_lft forever 00:14:31.120 11:42:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:31.120 11:42:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:31.120 11:42:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:31.120 11:42:01 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:31.120 11:42:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:31.120 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:31.120 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:31.120 altname enp217s0f1np1 00:14:31.120 altname ens818f1np1 00:14:31.120 inet 192.168.100.9/24 scope global mlx_0_1 00:14:31.120 valid_lft forever preferred_lft forever 00:14:31.120 11:42:01 -- nvmf/common.sh@410 -- # return 0 00:14:31.120 11:42:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:31.120 11:42:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:31.120 11:42:01 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:31.120 11:42:01 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:31.120 11:42:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:31.120 11:42:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:31.120 11:42:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:31.120 11:42:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:31.120 11:42:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:31.120 11:42:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:31.120 11:42:01 -- nvmf/common.sh@104 -- # continue 2 00:14:31.120 11:42:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.120 11:42:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:31.120 11:42:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:31.120 11:42:01 -- nvmf/common.sh@104 -- # continue 2 00:14:31.120 11:42:01 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:14:31.120 11:42:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:31.120 11:42:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:31.120 11:42:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:31.120 11:42:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:31.120 11:42:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:31.120 11:42:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:31.120 11:42:01 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:31.120 192.168.100.9' 00:14:31.120 11:42:01 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:31.120 192.168.100.9' 00:14:31.120 11:42:01 -- nvmf/common.sh@445 -- # head -n 1 00:14:31.120 11:42:01 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:31.120 11:42:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:31.120 192.168.100.9' 00:14:31.120 11:42:01 -- nvmf/common.sh@446 -- # tail -n +2 00:14:31.120 11:42:01 -- nvmf/common.sh@446 -- # head -n 1 00:14:31.120 11:42:01 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:31.120 11:42:01 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:31.120 11:42:01 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:31.120 11:42:01 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:31.120 11:42:01 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:31.120 11:42:01 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:31.120 11:42:01 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:31.120 11:42:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:31.120 11:42:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.120 11:42:01 -- common/autotest_common.sh@10 -- # set +x 00:14:31.120 11:42:01 -- nvmf/common.sh@469 -- # nvmfpid=3685633 00:14:31.120 11:42:01 -- nvmf/common.sh@470 -- # waitforlisten 3685633 00:14:31.120 11:42:01 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.120 11:42:01 -- common/autotest_common.sh@829 -- # '[' -z 3685633 ']' 00:14:31.120 11:42:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.120 11:42:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.120 11:42:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.120 11:42:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.120 11:42:01 -- common/autotest_common.sh@10 -- # set +x 00:14:31.379 [2024-12-03 11:42:01.743456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
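At this point both ConnectX ports have addresses (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1) and the nvmf_tgt application has been launched. The address lookup in the trace is a plain ip/awk/cut pipeline; a small sketch of the same extraction, with the interface names taken from this log, is:

#!/usr/bin/env bash
# Reproduce the get_ip_address lookup traced above: first IPv4 address
# of each RDMA netdev. Interface names are the ones reported in this log.
set -euo pipefail

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for ifc in mlx_0_0 mlx_0_1; do
    printf '%s %s\n' "$ifc" "$(get_ip_address "$ifc")"
done
# Expected on this rig (from the trace): 192.168.100.8 and 192.168.100.9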
00:14:31.379 [2024-12-03 11:42:01.743510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.379 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.379 [2024-12-03 11:42:01.814213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.379 [2024-12-03 11:42:01.888429] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.379 [2024-12-03 11:42:01.888535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.379 [2024-12-03 11:42:01.888545] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.379 [2024-12-03 11:42:01.888554] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.379 [2024-12-03 11:42:01.888600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.379 [2024-12-03 11:42:01.888694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.379 [2024-12-03 11:42:01.888777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.379 [2024-12-03 11:42:01.888779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.315 11:42:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.315 11:42:02 -- common/autotest_common.sh@862 -- # return 0 00:14:32.315 11:42:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:32.315 11:42:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.315 11:42:02 -- common/autotest_common.sh@10 -- # set +x 00:14:32.315 11:42:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.315 11:42:02 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:32.315 11:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.315 11:42:02 -- common/autotest_common.sh@10 -- # set +x 00:14:32.315 11:42:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.315 11:42:02 -- target/rpc.sh@26 -- # stats='{ 00:14:32.315 "tick_rate": 2500000000, 00:14:32.315 "poll_groups": [ 00:14:32.315 { 00:14:32.315 "name": "nvmf_tgt_poll_group_0", 00:14:32.315 "admin_qpairs": 0, 00:14:32.315 "io_qpairs": 0, 00:14:32.315 "current_admin_qpairs": 0, 00:14:32.315 "current_io_qpairs": 0, 00:14:32.315 "pending_bdev_io": 0, 00:14:32.315 "completed_nvme_io": 0, 00:14:32.315 "transports": [] 00:14:32.315 }, 00:14:32.315 { 00:14:32.315 "name": "nvmf_tgt_poll_group_1", 00:14:32.315 "admin_qpairs": 0, 00:14:32.315 "io_qpairs": 0, 00:14:32.315 "current_admin_qpairs": 0, 00:14:32.315 "current_io_qpairs": 0, 00:14:32.315 "pending_bdev_io": 0, 00:14:32.315 "completed_nvme_io": 0, 00:14:32.315 "transports": [] 00:14:32.315 }, 00:14:32.315 { 00:14:32.315 "name": "nvmf_tgt_poll_group_2", 00:14:32.315 "admin_qpairs": 0, 00:14:32.315 "io_qpairs": 0, 00:14:32.315 "current_admin_qpairs": 0, 00:14:32.315 "current_io_qpairs": 0, 00:14:32.315 "pending_bdev_io": 0, 00:14:32.315 "completed_nvme_io": 0, 00:14:32.315 "transports": [] 00:14:32.315 }, 00:14:32.315 { 00:14:32.315 "name": "nvmf_tgt_poll_group_3", 00:14:32.315 "admin_qpairs": 0, 00:14:32.315 "io_qpairs": 0, 00:14:32.315 "current_admin_qpairs": 0, 00:14:32.315 "current_io_qpairs": 0, 00:14:32.315 "pending_bdev_io": 0, 00:14:32.315 "completed_nvme_io": 0, 00:14:32.315 "transports": [] 
00:14:32.315 } 00:14:32.315 ] 00:14:32.315 }' 00:14:32.315 11:42:02 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:32.315 11:42:02 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:32.315 11:42:02 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:32.315 11:42:02 -- target/rpc.sh@15 -- # wc -l 00:14:32.315 11:42:02 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:32.315 11:42:02 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:32.315 11:42:02 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:32.315 11:42:02 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:32.316 11:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.316 11:42:02 -- common/autotest_common.sh@10 -- # set +x 00:14:32.316 [2024-12-03 11:42:02.748892] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xff70a0/0xffb590) succeed. 00:14:32.316 [2024-12-03 11:42:02.758028] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xff8690/0x103cc30) succeed. 00:14:32.316 11:42:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.316 11:42:02 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:32.316 11:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.316 11:42:02 -- common/autotest_common.sh@10 -- # set +x 00:14:32.316 11:42:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.316 11:42:02 -- target/rpc.sh@33 -- # stats='{ 00:14:32.316 "tick_rate": 2500000000, 00:14:32.316 "poll_groups": [ 00:14:32.316 { 00:14:32.316 "name": "nvmf_tgt_poll_group_0", 00:14:32.316 "admin_qpairs": 0, 00:14:32.316 "io_qpairs": 0, 00:14:32.316 "current_admin_qpairs": 0, 00:14:32.316 "current_io_qpairs": 0, 00:14:32.316 "pending_bdev_io": 0, 00:14:32.316 "completed_nvme_io": 0, 00:14:32.316 "transports": [ 00:14:32.316 { 00:14:32.316 "trtype": "RDMA", 00:14:32.316 "pending_data_buffer": 0, 00:14:32.316 "devices": [ 00:14:32.316 { 00:14:32.316 "name": "mlx5_0", 00:14:32.316 "polls": 15850, 00:14:32.316 "idle_polls": 15850, 00:14:32.316 "completions": 0, 00:14:32.316 "requests": 0, 00:14:32.316 "request_latency": 0, 00:14:32.316 "pending_free_request": 0, 00:14:32.316 "pending_rdma_read": 0, 00:14:32.316 "pending_rdma_write": 0, 00:14:32.316 "pending_rdma_send": 0, 00:14:32.316 "total_send_wrs": 0, 00:14:32.316 "send_doorbell_updates": 0, 00:14:32.316 "total_recv_wrs": 4096, 00:14:32.316 "recv_doorbell_updates": 1 00:14:32.316 }, 00:14:32.316 { 00:14:32.316 "name": "mlx5_1", 00:14:32.316 "polls": 15850, 00:14:32.316 "idle_polls": 15850, 00:14:32.316 "completions": 0, 00:14:32.316 "requests": 0, 00:14:32.316 "request_latency": 0, 00:14:32.316 "pending_free_request": 0, 00:14:32.316 "pending_rdma_read": 0, 00:14:32.316 "pending_rdma_write": 0, 00:14:32.316 "pending_rdma_send": 0, 00:14:32.316 "total_send_wrs": 0, 00:14:32.316 "send_doorbell_updates": 0, 00:14:32.316 "total_recv_wrs": 4096, 00:14:32.316 "recv_doorbell_updates": 1 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 }, 00:14:32.316 { 00:14:32.316 "name": "nvmf_tgt_poll_group_1", 00:14:32.316 "admin_qpairs": 0, 00:14:32.316 "io_qpairs": 0, 00:14:32.316 "current_admin_qpairs": 0, 00:14:32.316 "current_io_qpairs": 0, 00:14:32.316 "pending_bdev_io": 0, 00:14:32.316 "completed_nvme_io": 0, 00:14:32.316 "transports": [ 00:14:32.316 { 00:14:32.316 "trtype": "RDMA", 00:14:32.316 "pending_data_buffer": 0, 00:14:32.316 "devices": [ 00:14:32.316 { 00:14:32.316 "name": "mlx5_0", 00:14:32.316 "polls": 10060, 
00:14:32.316 "idle_polls": 10060, 00:14:32.316 "completions": 0, 00:14:32.316 "requests": 0, 00:14:32.316 "request_latency": 0, 00:14:32.316 "pending_free_request": 0, 00:14:32.316 "pending_rdma_read": 0, 00:14:32.316 "pending_rdma_write": 0, 00:14:32.316 "pending_rdma_send": 0, 00:14:32.316 "total_send_wrs": 0, 00:14:32.316 "send_doorbell_updates": 0, 00:14:32.316 "total_recv_wrs": 4096, 00:14:32.316 "recv_doorbell_updates": 1 00:14:32.316 }, 00:14:32.316 { 00:14:32.316 "name": "mlx5_1", 00:14:32.316 "polls": 10060, 00:14:32.316 "idle_polls": 10060, 00:14:32.316 "completions": 0, 00:14:32.316 "requests": 0, 00:14:32.316 "request_latency": 0, 00:14:32.316 "pending_free_request": 0, 00:14:32.316 "pending_rdma_read": 0, 00:14:32.316 "pending_rdma_write": 0, 00:14:32.316 "pending_rdma_send": 0, 00:14:32.316 "total_send_wrs": 0, 00:14:32.316 "send_doorbell_updates": 0, 00:14:32.316 "total_recv_wrs": 4096, 00:14:32.316 "recv_doorbell_updates": 1 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 }, 00:14:32.316 { 00:14:32.316 "name": "nvmf_tgt_poll_group_2", 00:14:32.316 "admin_qpairs": 0, 00:14:32.316 "io_qpairs": 0, 00:14:32.316 "current_admin_qpairs": 0, 00:14:32.316 "current_io_qpairs": 0, 00:14:32.316 "pending_bdev_io": 0, 00:14:32.316 "completed_nvme_io": 0, 00:14:32.316 "transports": [ 00:14:32.316 { 00:14:32.316 "trtype": "RDMA", 00:14:32.316 "pending_data_buffer": 0, 00:14:32.316 "devices": [ 00:14:32.316 { 00:14:32.316 "name": "mlx5_0", 00:14:32.316 "polls": 5639, 00:14:32.316 "idle_polls": 5639, 00:14:32.316 "completions": 0, 00:14:32.316 "requests": 0, 00:14:32.316 "request_latency": 0, 00:14:32.316 "pending_free_request": 0, 00:14:32.316 "pending_rdma_read": 0, 00:14:32.316 "pending_rdma_write": 0, 00:14:32.316 "pending_rdma_send": 0, 00:14:32.316 "total_send_wrs": 0, 00:14:32.316 "send_doorbell_updates": 0, 00:14:32.316 "total_recv_wrs": 4096, 00:14:32.316 "recv_doorbell_updates": 1 00:14:32.316 }, 00:14:32.316 { 00:14:32.316 "name": "mlx5_1", 00:14:32.316 "polls": 5639, 00:14:32.316 "idle_polls": 5639, 00:14:32.316 "completions": 0, 00:14:32.316 "requests": 0, 00:14:32.316 "request_latency": 0, 00:14:32.316 "pending_free_request": 0, 00:14:32.316 "pending_rdma_read": 0, 00:14:32.316 "pending_rdma_write": 0, 00:14:32.316 "pending_rdma_send": 0, 00:14:32.316 "total_send_wrs": 0, 00:14:32.316 "send_doorbell_updates": 0, 00:14:32.316 "total_recv_wrs": 4096, 00:14:32.316 "recv_doorbell_updates": 1 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 }, 00:14:32.316 { 00:14:32.316 "name": "nvmf_tgt_poll_group_3", 00:14:32.316 "admin_qpairs": 0, 00:14:32.316 "io_qpairs": 0, 00:14:32.316 "current_admin_qpairs": 0, 00:14:32.316 "current_io_qpairs": 0, 00:14:32.316 "pending_bdev_io": 0, 00:14:32.316 "completed_nvme_io": 0, 00:14:32.316 "transports": [ 00:14:32.316 { 00:14:32.316 "trtype": "RDMA", 00:14:32.316 "pending_data_buffer": 0, 00:14:32.316 "devices": [ 00:14:32.316 { 00:14:32.316 "name": "mlx5_0", 00:14:32.316 "polls": 895, 00:14:32.316 "idle_polls": 895, 00:14:32.316 "completions": 0, 00:14:32.316 "requests": 0, 00:14:32.316 "request_latency": 0, 00:14:32.316 "pending_free_request": 0, 00:14:32.316 "pending_rdma_read": 0, 00:14:32.316 "pending_rdma_write": 0, 00:14:32.316 "pending_rdma_send": 0, 00:14:32.316 "total_send_wrs": 0, 00:14:32.316 "send_doorbell_updates": 0, 00:14:32.316 "total_recv_wrs": 4096, 00:14:32.316 "recv_doorbell_updates": 1 00:14:32.316 }, 00:14:32.316 { 00:14:32.316 "name": "mlx5_1", 00:14:32.316 "polls": 895, 
00:14:32.316 "idle_polls": 895, 00:14:32.316 "completions": 0, 00:14:32.316 "requests": 0, 00:14:32.316 "request_latency": 0, 00:14:32.316 "pending_free_request": 0, 00:14:32.316 "pending_rdma_read": 0, 00:14:32.316 "pending_rdma_write": 0, 00:14:32.316 "pending_rdma_send": 0, 00:14:32.316 "total_send_wrs": 0, 00:14:32.316 "send_doorbell_updates": 0, 00:14:32.316 "total_recv_wrs": 4096, 00:14:32.316 "recv_doorbell_updates": 1 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 } 00:14:32.316 ] 00:14:32.316 }' 00:14:32.316 11:42:02 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:32.316 11:42:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:32.316 11:42:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:32.316 11:42:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:32.575 11:42:02 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:32.575 11:42:02 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:32.575 11:42:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:32.575 11:42:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:32.575 11:42:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:32.575 11:42:03 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:32.575 11:42:03 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:32.575 11:42:03 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:32.575 11:42:03 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:32.575 11:42:03 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:32.575 11:42:03 -- target/rpc.sh@15 -- # wc -l 00:14:32.575 11:42:03 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:32.575 11:42:03 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:32.575 11:42:03 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:32.575 11:42:03 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:32.575 11:42:03 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:32.575 11:42:03 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:32.575 11:42:03 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:32.575 11:42:03 -- target/rpc.sh@15 -- # wc -l 00:14:32.575 11:42:03 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:32.575 11:42:03 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:32.575 11:42:03 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:32.575 11:42:03 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:32.575 11:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.575 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.575 Malloc1 00:14:32.575 11:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.575 11:42:03 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:32.575 11:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.575 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.575 11:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.575 11:42:03 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.575 11:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.575 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.575 11:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.575 
11:42:03 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:32.575 11:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.576 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.834 11:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.834 11:42:03 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:32.834 11:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.834 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.834 [2024-12-03 11:42:03.196620] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:32.834 11:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.834 11:42:03 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:32.834 11:42:03 -- common/autotest_common.sh@650 -- # local es=0 00:14:32.834 11:42:03 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:32.834 11:42:03 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:32.834 11:42:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.834 11:42:03 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:32.834 11:42:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.834 11:42:03 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:32.834 11:42:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.834 11:42:03 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:32.834 11:42:03 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:32.834 11:42:03 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:32.834 [2024-12-03 11:42:03.248549] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:32.834 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:32.834 could not add new controller: failed to write to nvme-fabrics device 00:14:32.834 11:42:03 -- common/autotest_common.sh@653 -- # es=1 00:14:32.834 11:42:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.834 11:42:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.834 11:42:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.834 11:42:03 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:32.834 11:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.834 11:42:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.834 
11:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.834 11:42:03 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:33.768 11:42:04 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:33.768 11:42:04 -- common/autotest_common.sh@1187 -- # local i=0 00:14:33.768 11:42:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:33.768 11:42:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:33.768 11:42:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:35.682 11:42:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:35.682 11:42:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:35.682 11:42:06 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:35.940 11:42:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:35.940 11:42:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:35.940 11:42:06 -- common/autotest_common.sh@1197 -- # return 0 00:14:35.940 11:42:06 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.872 11:42:07 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:36.872 11:42:07 -- common/autotest_common.sh@1208 -- # local i=0 00:14:36.872 11:42:07 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:36.872 11:42:07 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.872 11:42:07 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:36.872 11:42:07 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.872 11:42:07 -- common/autotest_common.sh@1220 -- # return 0 00:14:36.872 11:42:07 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:36.872 11:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.872 11:42:07 -- common/autotest_common.sh@10 -- # set +x 00:14:36.872 11:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.872 11:42:07 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:36.872 11:42:07 -- common/autotest_common.sh@650 -- # local es=0 00:14:36.872 11:42:07 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:36.872 11:42:07 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:36.872 11:42:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.872 11:42:07 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:36.872 11:42:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.872 11:42:07 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:36.872 11:42:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.872 11:42:07 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:36.872 
11:42:07 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:36.872 11:42:07 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:36.872 [2024-12-03 11:42:07.370739] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:36.872 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:36.872 could not add new controller: failed to write to nvme-fabrics device 00:14:36.872 11:42:07 -- common/autotest_common.sh@653 -- # es=1 00:14:36.872 11:42:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.872 11:42:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.872 11:42:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.872 11:42:07 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:36.872 11:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.872 11:42:07 -- common/autotest_common.sh@10 -- # set +x 00:14:36.872 11:42:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.872 11:42:07 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:37.805 11:42:08 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:37.805 11:42:08 -- common/autotest_common.sh@1187 -- # local i=0 00:14:37.805 11:42:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.805 11:42:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:37.805 11:42:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:40.331 11:42:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:40.331 11:42:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:40.331 11:42:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.331 11:42:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:40.331 11:42:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.331 11:42:10 -- common/autotest_common.sh@1197 -- # return 0 00:14:40.331 11:42:10 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:40.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.897 11:42:11 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:40.897 11:42:11 -- common/autotest_common.sh@1208 -- # local i=0 00:14:40.897 11:42:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:40.897 11:42:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.897 11:42:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:40.897 11:42:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.897 11:42:11 -- common/autotest_common.sh@1220 -- # return 0 00:14:40.897 11:42:11 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.897 11:42:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.897 11:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:40.897 11:42:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
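The two rejected connect attempts above are the expected behaviour: with allow_any_host disabled and no nvmf_subsystem_add_host entry for the initiator's NQN, the target refuses the fabrics connect ("does not allow host"), and the attempt only succeeds after the host is added or allow_any_host is re-enabled. A condensed sketch of that access-control check, reusing the host NQN and listener address from the log:

#!/usr/bin/env bash
# Host access-control behaviour exercised above: connect is refused until the
# initiator NQN is allowed. NQNs and addresses are the ones from this log; the
# explicit error check around the first connect is an illustrative assumption.
set -euo pipefail
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
subnqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_subsystem_allow_any_host -d "$subnqn"    # turn the allow-list on

# Expected to fail: the host NQN is not on the subsystem's allow-list yet.
if nvme connect -i 15 --hostnqn="$hostnqn" -t rdma -n "$subnqn" \
        -a 192.168.100.8 -s 4420; then
    echo "unexpected: connect succeeded without an allow-list entry" >&2
    exit 1
fi

$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"  # now allow this initiator
nvme connect -i 15 --hostnqn="$hostnqn" -t rdma -n "$subnqn" \
    -a 192.168.100.8 -s 4420                       # succeeds this time
nvme disconnect -n "$subnqn"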
00:14:40.897 11:42:11 -- target/rpc.sh@81 -- # seq 1 5 00:14:40.897 11:42:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:40.897 11:42:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:40.897 11:42:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.897 11:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:40.897 11:42:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.897 11:42:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:40.897 11:42:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.897 11:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:40.897 [2024-12-03 11:42:11.461255] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:40.898 11:42:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.898 11:42:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:40.898 11:42:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.898 11:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:40.898 11:42:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.898 11:42:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:40.898 11:42:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.898 11:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:40.898 11:42:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.898 11:42:11 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:42.272 11:42:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.272 11:42:12 -- common/autotest_common.sh@1187 -- # local i=0 00:14:42.272 11:42:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.272 11:42:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:42.272 11:42:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:44.167 11:42:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:44.167 11:42:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:44.167 11:42:14 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.167 11:42:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:44.167 11:42:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.167 11:42:14 -- common/autotest_common.sh@1197 -- # return 0 00:14:44.167 11:42:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.097 11:42:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.097 11:42:15 -- common/autotest_common.sh@1208 -- # local i=0 00:14:45.097 11:42:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:45.097 11:42:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.097 11:42:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:45.097 11:42:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.097 11:42:15 -- common/autotest_common.sh@1220 -- # return 0 00:14:45.097 
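waitforserial and waitforserial_disconnect, traced above, gate each iteration of the connect loop: the first polls lsblk until a block device with the subsystem's serial shows up, the second until it disappears again after nvme disconnect. A simplified sketch of both helpers, with the 15-try/2-second cadence taken from the waitforserial trace and assumed for the disconnect side:

#!/usr/bin/env bash
# Simplified versions of the serial-polling helpers traced above. The retry
# budget (15 tries, 2 s apart) matches the waitforserial trace; reusing the
# same budget for the disconnect wait is an assumption.

waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL column matches, e.g. SPDKISFASTANDAWESOME
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            return 0
        fi
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        if ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
            return 0
        fi
        sleep 2
    done
    return 1
}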
11:42:15 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:45.097 11:42:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.097 11:42:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 11:42:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.097 11:42:15 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.097 11:42:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.097 11:42:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 11:42:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.097 11:42:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:45.097 11:42:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:45.097 11:42:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.097 11:42:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 11:42:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.097 11:42:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:45.097 11:42:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.097 11:42:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 [2024-12-03 11:42:15.524430] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:45.097 11:42:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.097 11:42:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:45.097 11:42:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.097 11:42:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 11:42:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.097 11:42:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.097 11:42:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.097 11:42:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.097 11:42:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.097 11:42:15 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:46.026 11:42:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.026 11:42:16 -- common/autotest_common.sh@1187 -- # local i=0 00:14:46.026 11:42:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.026 11:42:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:46.026 11:42:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:48.549 11:42:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:48.549 11:42:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:48.549 11:42:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.549 11:42:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:48.549 11:42:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.549 11:42:18 -- common/autotest_common.sh@1197 -- # return 0 00:14:48.549 11:42:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.189 11:42:19 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.189 11:42:19 -- common/autotest_common.sh@1208 -- # local i=0 00:14:49.189 11:42:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:49.189 11:42:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.189 11:42:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:49.189 11:42:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.189 11:42:19 -- common/autotest_common.sh@1220 -- # return 0 00:14:49.189 11:42:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:49.189 11:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.189 11:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.189 11:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.189 11:42:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.189 11:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.189 11:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.189 11:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.189 11:42:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:49.189 11:42:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:49.189 11:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.189 11:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.189 11:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.189 11:42:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:49.189 11:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.189 11:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.189 [2024-12-03 11:42:19.587321] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:49.189 11:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.189 11:42:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:49.189 11:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.189 11:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.189 11:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.189 11:42:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:49.189 11:42:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.189 11:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.189 11:42:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.189 11:42:19 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:50.125 11:42:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.125 11:42:20 -- common/autotest_common.sh@1187 -- # local i=0 00:14:50.125 11:42:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.125 11:42:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:50.125 11:42:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:52.024 11:42:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:52.024 11:42:22 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:52.024 11:42:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.024 11:42:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:52.024 11:42:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.024 11:42:22 -- common/autotest_common.sh@1197 -- # return 0 00:14:52.024 11:42:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.958 11:42:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:52.958 11:42:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:52.958 11:42:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:52.958 11:42:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.217 11:42:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:53.217 11:42:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.217 11:42:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:53.217 11:42:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:53.217 11:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.217 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.217 11:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.217 11:42:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.217 11:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.217 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.217 11:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.217 11:42:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:53.217 11:42:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:53.217 11:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.217 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.217 11:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.217 11:42:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:53.217 11:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.217 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.217 [2024-12-03 11:42:23.634270] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.217 11:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.217 11:42:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:53.217 11:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.217 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.217 11:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.217 11:42:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:53.217 11:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.217 11:42:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.217 11:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.217 11:42:23 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:54.151 11:42:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:54.151 11:42:24 -- common/autotest_common.sh@1187 -- # local i=0 00:14:54.151 11:42:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.151 11:42:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:54.151 11:42:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:56.049 11:42:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:56.049 11:42:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:56.049 11:42:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.307 11:42:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:56.307 11:42:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.307 11:42:26 -- common/autotest_common.sh@1197 -- # return 0 00:14:56.307 11:42:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.243 11:42:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:57.243 11:42:27 -- common/autotest_common.sh@1208 -- # local i=0 00:14:57.243 11:42:27 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:57.243 11:42:27 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.243 11:42:27 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:57.243 11:42:27 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.243 11:42:27 -- common/autotest_common.sh@1220 -- # return 0 00:14:57.243 11:42:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.243 11:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.243 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.243 11:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.243 11:42:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.243 11:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.243 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.243 11:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.243 11:42:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:57.243 11:42:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:57.243 11:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.243 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.243 11:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.243 11:42:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:57.243 11:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.243 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.243 [2024-12-03 11:42:27.655935] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:57.243 11:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.243 11:42:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:57.243 11:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.243 11:42:27 -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.243 11:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.243 11:42:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:57.243 11:42:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.243 11:42:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.243 11:42:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.243 11:42:27 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:58.178 11:42:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:58.178 11:42:28 -- common/autotest_common.sh@1187 -- # local i=0 00:14:58.178 11:42:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.178 11:42:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:58.178 11:42:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:00.079 11:42:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:00.079 11:42:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:00.079 11:42:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.079 11:42:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:00.079 11:42:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.079 11:42:30 -- common/autotest_common.sh@1197 -- # return 0 00:15:00.079 11:42:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.454 11:42:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.454 11:42:31 -- common/autotest_common.sh@1208 -- # local i=0 00:15:01.454 11:42:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:01.454 11:42:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.454 11:42:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:01.454 11:42:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.454 11:42:31 -- common/autotest_common.sh@1220 -- # return 0 00:15:01.454 11:42:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@99 -- # seq 1 5 00:15:01.454 11:42:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:01.454 11:42:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 [2024-12-03 11:42:31.716045] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:01.454 11:42:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 [2024-12-03 11:42:31.768240] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 
11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:01.454 11:42:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 [2024-12-03 11:42:31.816406] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:01.454 11:42:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 [2024-12-03 11:42:31.864561] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.454 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.454 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.454 11:42:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:01.454 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.455 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.455 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:01.455 11:42:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:01.455 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:01.455 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 [2024-12-03 11:42:31.916741] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.455 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:01.455 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.455 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.455 11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:31 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:01.455 
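The xtrace above is the per-iteration create/teardown churn from target/rpc.sh: each pass of the loop brings up a subsystem, exposes it over RDMA, attaches and detaches a namespace, and deletes it again before nvmf_get_stats is dumped. A minimal bash sketch of that loop, reconstructed only from the commands visible in the trace (rpc_cmd is the harness wrapper around scripts/rpc.py and the $loops count is set earlier in the script, so both are assumptions here):

# Sketch of the subsystem churn loop seen in the trace; rpc_cmd and $loops come from the harness (assumed).
nqn=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
    rpc_cmd nvmf_subsystem_remove_ns "$nqn" 1
    rpc_cmd nvmf_delete_subsystem "$nqn"
done
rpc_cmd nvmf_get_stats    # JSON dump that the jsum checks below operate on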
11:42:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.455 11:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 11:42:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.455 11:42:32 -- target/rpc.sh@110 -- # stats='{ 00:15:01.455 "tick_rate": 2500000000, 00:15:01.455 "poll_groups": [ 00:15:01.455 { 00:15:01.455 "name": "nvmf_tgt_poll_group_0", 00:15:01.455 "admin_qpairs": 2, 00:15:01.455 "io_qpairs": 27, 00:15:01.455 "current_admin_qpairs": 0, 00:15:01.455 "current_io_qpairs": 0, 00:15:01.455 "pending_bdev_io": 0, 00:15:01.455 "completed_nvme_io": 78, 00:15:01.455 "transports": [ 00:15:01.455 { 00:15:01.455 "trtype": "RDMA", 00:15:01.455 "pending_data_buffer": 0, 00:15:01.455 "devices": [ 00:15:01.455 { 00:15:01.455 "name": "mlx5_0", 00:15:01.455 "polls": 3455168, 00:15:01.455 "idle_polls": 3454921, 00:15:01.455 "completions": 267, 00:15:01.455 "requests": 133, 00:15:01.455 "request_latency": 22290496, 00:15:01.455 "pending_free_request": 0, 00:15:01.455 "pending_rdma_read": 0, 00:15:01.455 "pending_rdma_write": 0, 00:15:01.455 "pending_rdma_send": 0, 00:15:01.455 "total_send_wrs": 211, 00:15:01.455 "send_doorbell_updates": 121, 00:15:01.455 "total_recv_wrs": 4229, 00:15:01.455 "recv_doorbell_updates": 121 00:15:01.455 }, 00:15:01.455 { 00:15:01.455 "name": "mlx5_1", 00:15:01.455 "polls": 3455168, 00:15:01.455 "idle_polls": 3455168, 00:15:01.455 "completions": 0, 00:15:01.455 "requests": 0, 00:15:01.455 "request_latency": 0, 00:15:01.455 "pending_free_request": 0, 00:15:01.455 "pending_rdma_read": 0, 00:15:01.455 "pending_rdma_write": 0, 00:15:01.455 "pending_rdma_send": 0, 00:15:01.455 "total_send_wrs": 0, 00:15:01.455 "send_doorbell_updates": 0, 00:15:01.455 "total_recv_wrs": 4096, 00:15:01.455 "recv_doorbell_updates": 1 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 }, 00:15:01.455 { 00:15:01.455 "name": "nvmf_tgt_poll_group_1", 00:15:01.455 "admin_qpairs": 2, 00:15:01.455 "io_qpairs": 26, 00:15:01.455 "current_admin_qpairs": 0, 00:15:01.455 "current_io_qpairs": 0, 00:15:01.455 "pending_bdev_io": 0, 00:15:01.455 "completed_nvme_io": 77, 00:15:01.455 "transports": [ 00:15:01.455 { 00:15:01.455 "trtype": "RDMA", 00:15:01.455 "pending_data_buffer": 0, 00:15:01.455 "devices": [ 00:15:01.455 { 00:15:01.455 "name": "mlx5_0", 00:15:01.455 "polls": 3403151, 00:15:01.455 "idle_polls": 3402911, 00:15:01.455 "completions": 260, 00:15:01.455 "requests": 130, 00:15:01.455 "request_latency": 21626562, 00:15:01.455 "pending_free_request": 0, 00:15:01.455 "pending_rdma_read": 0, 00:15:01.455 "pending_rdma_write": 0, 00:15:01.455 "pending_rdma_send": 0, 00:15:01.455 "total_send_wrs": 205, 00:15:01.455 "send_doorbell_updates": 118, 00:15:01.455 "total_recv_wrs": 4226, 00:15:01.455 "recv_doorbell_updates": 119 00:15:01.455 }, 00:15:01.455 { 00:15:01.455 "name": "mlx5_1", 00:15:01.455 "polls": 3403151, 00:15:01.455 "idle_polls": 3403151, 00:15:01.455 "completions": 0, 00:15:01.455 "requests": 0, 00:15:01.455 "request_latency": 0, 00:15:01.455 "pending_free_request": 0, 00:15:01.455 "pending_rdma_read": 0, 00:15:01.455 "pending_rdma_write": 0, 00:15:01.455 "pending_rdma_send": 0, 00:15:01.455 "total_send_wrs": 0, 00:15:01.455 "send_doorbell_updates": 0, 00:15:01.455 "total_recv_wrs": 4096, 00:15:01.455 "recv_doorbell_updates": 1 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 }, 00:15:01.455 { 00:15:01.455 "name": "nvmf_tgt_poll_group_2", 00:15:01.455 "admin_qpairs": 1, 00:15:01.455 "io_qpairs": 26, 00:15:01.455 
"current_admin_qpairs": 0, 00:15:01.455 "current_io_qpairs": 0, 00:15:01.455 "pending_bdev_io": 0, 00:15:01.455 "completed_nvme_io": 125, 00:15:01.455 "transports": [ 00:15:01.455 { 00:15:01.455 "trtype": "RDMA", 00:15:01.455 "pending_data_buffer": 0, 00:15:01.455 "devices": [ 00:15:01.455 { 00:15:01.455 "name": "mlx5_0", 00:15:01.455 "polls": 3455609, 00:15:01.455 "idle_polls": 3455344, 00:15:01.455 "completions": 307, 00:15:01.455 "requests": 153, 00:15:01.455 "request_latency": 32409478, 00:15:01.455 "pending_free_request": 0, 00:15:01.455 "pending_rdma_read": 0, 00:15:01.455 "pending_rdma_write": 0, 00:15:01.455 "pending_rdma_send": 0, 00:15:01.455 "total_send_wrs": 266, 00:15:01.455 "send_doorbell_updates": 130, 00:15:01.455 "total_recv_wrs": 4249, 00:15:01.455 "recv_doorbell_updates": 130 00:15:01.455 }, 00:15:01.455 { 00:15:01.455 "name": "mlx5_1", 00:15:01.455 "polls": 3455609, 00:15:01.455 "idle_polls": 3455609, 00:15:01.455 "completions": 0, 00:15:01.455 "requests": 0, 00:15:01.455 "request_latency": 0, 00:15:01.455 "pending_free_request": 0, 00:15:01.455 "pending_rdma_read": 0, 00:15:01.455 "pending_rdma_write": 0, 00:15:01.455 "pending_rdma_send": 0, 00:15:01.455 "total_send_wrs": 0, 00:15:01.455 "send_doorbell_updates": 0, 00:15:01.455 "total_recv_wrs": 4096, 00:15:01.455 "recv_doorbell_updates": 1 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 }, 00:15:01.455 { 00:15:01.455 "name": "nvmf_tgt_poll_group_3", 00:15:01.455 "admin_qpairs": 2, 00:15:01.455 "io_qpairs": 26, 00:15:01.455 "current_admin_qpairs": 0, 00:15:01.455 "current_io_qpairs": 0, 00:15:01.455 "pending_bdev_io": 0, 00:15:01.455 "completed_nvme_io": 175, 00:15:01.455 "transports": [ 00:15:01.455 { 00:15:01.455 "trtype": "RDMA", 00:15:01.455 "pending_data_buffer": 0, 00:15:01.455 "devices": [ 00:15:01.455 { 00:15:01.455 "name": "mlx5_0", 00:15:01.455 "polls": 2678649, 00:15:01.455 "idle_polls": 2678256, 00:15:01.455 "completions": 454, 00:15:01.455 "requests": 227, 00:15:01.455 "request_latency": 50172474, 00:15:01.455 "pending_free_request": 0, 00:15:01.455 "pending_rdma_read": 0, 00:15:01.455 "pending_rdma_write": 0, 00:15:01.455 "pending_rdma_send": 0, 00:15:01.455 "total_send_wrs": 400, 00:15:01.455 "send_doorbell_updates": 190, 00:15:01.455 "total_recv_wrs": 4323, 00:15:01.455 "recv_doorbell_updates": 191 00:15:01.455 }, 00:15:01.455 { 00:15:01.455 "name": "mlx5_1", 00:15:01.455 "polls": 2678649, 00:15:01.455 "idle_polls": 2678649, 00:15:01.455 "completions": 0, 00:15:01.455 "requests": 0, 00:15:01.455 "request_latency": 0, 00:15:01.455 "pending_free_request": 0, 00:15:01.455 "pending_rdma_read": 0, 00:15:01.455 "pending_rdma_write": 0, 00:15:01.455 "pending_rdma_send": 0, 00:15:01.455 "total_send_wrs": 0, 00:15:01.455 "send_doorbell_updates": 0, 00:15:01.455 "total_recv_wrs": 4096, 00:15:01.455 "recv_doorbell_updates": 1 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 } 00:15:01.455 ] 00:15:01.455 }' 00:15:01.455 11:42:32 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:01.455 11:42:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:01.455 11:42:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:01.456 11:42:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:01.456 11:42:32 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:01.456 11:42:32 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:01.456 11:42:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:01.456 
11:42:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:01.456 11:42:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:01.713 11:42:32 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:01.713 11:42:32 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:01.713 11:42:32 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:01.713 11:42:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:01.713 11:42:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:01.713 11:42:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:01.713 11:42:32 -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:15:01.713 11:42:32 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:01.713 11:42:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:01.713 11:42:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:01.713 11:42:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:01.713 11:42:32 -- target/rpc.sh@118 -- # (( 126499010 > 0 )) 00:15:01.713 11:42:32 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:01.713 11:42:32 -- target/rpc.sh@123 -- # nvmftestfini 00:15:01.713 11:42:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.713 11:42:32 -- nvmf/common.sh@116 -- # sync 00:15:01.713 11:42:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:01.713 11:42:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:01.713 11:42:32 -- nvmf/common.sh@119 -- # set +e 00:15:01.713 11:42:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.713 11:42:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:01.713 rmmod nvme_rdma 00:15:01.713 rmmod nvme_fabrics 00:15:01.713 11:42:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.714 11:42:32 -- nvmf/common.sh@123 -- # set -e 00:15:01.714 11:42:32 -- nvmf/common.sh@124 -- # return 0 00:15:01.714 11:42:32 -- nvmf/common.sh@477 -- # '[' -n 3685633 ']' 00:15:01.714 11:42:32 -- nvmf/common.sh@478 -- # killprocess 3685633 00:15:01.714 11:42:32 -- common/autotest_common.sh@936 -- # '[' -z 3685633 ']' 00:15:01.714 11:42:32 -- common/autotest_common.sh@940 -- # kill -0 3685633 00:15:01.714 11:42:32 -- common/autotest_common.sh@941 -- # uname 00:15:01.714 11:42:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.714 11:42:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3685633 00:15:01.714 11:42:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:01.714 11:42:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:01.714 11:42:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3685633' 00:15:01.714 killing process with pid 3685633 00:15:01.714 11:42:32 -- common/autotest_common.sh@955 -- # kill 3685633 00:15:01.714 11:42:32 -- common/autotest_common.sh@960 -- # wait 3685633 00:15:02.280 11:42:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.280 11:42:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:02.280 00:15:02.280 real 0m38.043s 00:15:02.280 user 2m4.631s 00:15:02.280 sys 0m7.070s 00:15:02.280 11:42:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:02.280 11:42:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.280 ************************************ 00:15:02.280 END TEST nvmf_rpc 00:15:02.280 ************************************ 00:15:02.280 11:42:32 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:02.280 11:42:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:02.280 11:42:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.280 11:42:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.280 ************************************ 00:15:02.280 START TEST nvmf_invalid 00:15:02.280 ************************************ 00:15:02.280 11:42:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:02.280 * Looking for test storage... 00:15:02.280 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:02.280 11:42:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:02.280 11:42:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:02.280 11:42:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:02.280 11:42:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:02.280 11:42:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:02.280 11:42:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:02.280 11:42:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:02.280 11:42:32 -- scripts/common.sh@335 -- # IFS=.-: 00:15:02.281 11:42:32 -- scripts/common.sh@335 -- # read -ra ver1 00:15:02.281 11:42:32 -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.281 11:42:32 -- scripts/common.sh@336 -- # read -ra ver2 00:15:02.281 11:42:32 -- scripts/common.sh@337 -- # local 'op=<' 00:15:02.281 11:42:32 -- scripts/common.sh@339 -- # ver1_l=2 00:15:02.281 11:42:32 -- scripts/common.sh@340 -- # ver2_l=1 00:15:02.281 11:42:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:02.281 11:42:32 -- scripts/common.sh@343 -- # case "$op" in 00:15:02.281 11:42:32 -- scripts/common.sh@344 -- # : 1 00:15:02.281 11:42:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:02.281 11:42:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:02.281 11:42:32 -- scripts/common.sh@364 -- # decimal 1 00:15:02.281 11:42:32 -- scripts/common.sh@352 -- # local d=1 00:15:02.281 11:42:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.281 11:42:32 -- scripts/common.sh@354 -- # echo 1 00:15:02.281 11:42:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:02.281 11:42:32 -- scripts/common.sh@365 -- # decimal 2 00:15:02.281 11:42:32 -- scripts/common.sh@352 -- # local d=2 00:15:02.281 11:42:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.281 11:42:32 -- scripts/common.sh@354 -- # echo 2 00:15:02.281 11:42:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:02.281 11:42:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:02.281 11:42:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:02.281 11:42:32 -- scripts/common.sh@367 -- # return 0 00:15:02.281 11:42:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.281 11:42:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:02.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.281 --rc genhtml_branch_coverage=1 00:15:02.281 --rc genhtml_function_coverage=1 00:15:02.281 --rc genhtml_legend=1 00:15:02.281 --rc geninfo_all_blocks=1 00:15:02.281 --rc geninfo_unexecuted_blocks=1 00:15:02.281 00:15:02.281 ' 00:15:02.281 11:42:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:02.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.281 --rc genhtml_branch_coverage=1 00:15:02.281 --rc genhtml_function_coverage=1 00:15:02.281 --rc genhtml_legend=1 00:15:02.281 --rc geninfo_all_blocks=1 00:15:02.281 --rc geninfo_unexecuted_blocks=1 00:15:02.281 00:15:02.281 ' 00:15:02.281 11:42:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:02.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.281 --rc genhtml_branch_coverage=1 00:15:02.281 --rc genhtml_function_coverage=1 00:15:02.281 --rc genhtml_legend=1 00:15:02.281 --rc geninfo_all_blocks=1 00:15:02.281 --rc geninfo_unexecuted_blocks=1 00:15:02.281 00:15:02.281 ' 00:15:02.281 11:42:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:02.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.281 --rc genhtml_branch_coverage=1 00:15:02.281 --rc genhtml_function_coverage=1 00:15:02.281 --rc genhtml_legend=1 00:15:02.281 --rc geninfo_all_blocks=1 00:15:02.281 --rc geninfo_unexecuted_blocks=1 00:15:02.281 00:15:02.281 ' 00:15:02.281 11:42:32 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.281 11:42:32 -- nvmf/common.sh@7 -- # uname -s 00:15:02.281 11:42:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.281 11:42:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.281 11:42:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.281 11:42:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.281 11:42:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.281 11:42:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.281 11:42:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.281 11:42:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.281 11:42:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.281 11:42:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.281 11:42:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:02.281 11:42:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:02.281 11:42:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.281 11:42:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.281 11:42:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.281 11:42:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:02.281 11:42:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.281 11:42:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.281 11:42:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.281 11:42:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.281 11:42:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.281 11:42:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.281 11:42:32 -- paths/export.sh@5 -- # export PATH 00:15:02.281 11:42:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.281 11:42:32 -- nvmf/common.sh@46 -- # : 0 00:15:02.281 11:42:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:02.281 11:42:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:02.281 11:42:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:02.281 11:42:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.281 11:42:32 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.281 11:42:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:02.281 11:42:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:02.281 11:42:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:02.281 11:42:32 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:02.281 11:42:32 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:02.281 11:42:32 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:02.281 11:42:32 -- target/invalid.sh@14 -- # target=foobar 00:15:02.281 11:42:32 -- target/invalid.sh@16 -- # RANDOM=0 00:15:02.281 11:42:32 -- target/invalid.sh@34 -- # nvmftestinit 00:15:02.281 11:42:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:02.281 11:42:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.281 11:42:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:02.281 11:42:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:02.281 11:42:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:02.281 11:42:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.281 11:42:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.281 11:42:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.281 11:42:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:02.281 11:42:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:02.281 11:42:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:02.281 11:42:32 -- common/autotest_common.sh@10 -- # set +x 00:15:08.839 11:42:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:08.839 11:42:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:08.839 11:42:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:08.839 11:42:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:08.839 11:42:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:08.839 11:42:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:08.839 11:42:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:08.839 11:42:39 -- nvmf/common.sh@294 -- # net_devs=() 00:15:08.839 11:42:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:08.839 11:42:39 -- nvmf/common.sh@295 -- # e810=() 00:15:08.839 11:42:39 -- nvmf/common.sh@295 -- # local -ga e810 00:15:08.839 11:42:39 -- nvmf/common.sh@296 -- # x722=() 00:15:08.839 11:42:39 -- nvmf/common.sh@296 -- # local -ga x722 00:15:08.839 11:42:39 -- nvmf/common.sh@297 -- # mlx=() 00:15:08.839 11:42:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:08.839 11:42:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.839 11:42:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:08.839 11:42:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:08.839 11:42:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:08.839 11:42:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:08.839 11:42:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:08.839 11:42:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.839 11:42:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:08.839 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:08.839 11:42:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:08.839 11:42:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.839 11:42:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:08.839 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:08.839 11:42:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:08.839 11:42:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:08.839 11:42:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:08.839 11:42:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.839 11:42:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.839 11:42:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.839 11:42:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.840 11:42:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:08.840 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.840 11:42:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.840 11:42:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.840 11:42:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.840 11:42:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:08.840 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:08.840 11:42:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.840 11:42:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:08.840 11:42:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:08.840 11:42:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:08.840 11:42:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:08.840 11:42:39 -- nvmf/common.sh@57 -- # uname 00:15:08.840 11:42:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:08.840 11:42:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:08.840 11:42:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:08.840 11:42:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:08.840 11:42:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:08.840 11:42:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:08.840 11:42:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:08.840 11:42:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:08.840 11:42:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:08.840 11:42:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:08.840 11:42:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:08.840 11:42:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.840 11:42:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:08.840 11:42:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:08.840 11:42:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.840 11:42:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:08.840 11:42:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@104 -- # continue 2 00:15:08.840 11:42:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:08.840 11:42:39 -- nvmf/common.sh@104 -- # continue 2 00:15:08.840 11:42:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:08.840 11:42:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.840 11:42:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:08.840 11:42:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:08.840 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.840 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:08.840 altname enp217s0f0np0 00:15:08.840 altname ens818f0np0 00:15:08.840 inet 192.168.100.8/24 scope global mlx_0_0 00:15:08.840 valid_lft forever preferred_lft forever 00:15:08.840 11:42:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:08.840 11:42:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:08.840 11:42:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:08.840 11:42:39 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.840 11:42:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:08.840 11:42:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:08.840 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.840 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:08.840 altname enp217s0f1np1 00:15:08.840 altname ens818f1np1 00:15:08.840 inet 192.168.100.9/24 scope global mlx_0_1 00:15:08.840 valid_lft forever preferred_lft forever 00:15:08.840 11:42:39 -- nvmf/common.sh@410 -- # return 0 00:15:08.840 11:42:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.840 11:42:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:08.840 11:42:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:08.840 11:42:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:08.840 11:42:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.840 11:42:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:08.840 11:42:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:08.840 11:42:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.840 11:42:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:08.840 11:42:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@104 -- # continue 2 00:15:08.840 11:42:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.840 11:42:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.840 11:42:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:08.840 11:42:39 -- nvmf/common.sh@104 -- # continue 2 00:15:08.840 11:42:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:08.840 11:42:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.840 11:42:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:08.840 11:42:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:08.840 11:42:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.840 11:42:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.840 11:42:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:08.840 192.168.100.9' 00:15:08.840 11:42:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:08.840 192.168.100.9' 00:15:08.840 11:42:39 -- nvmf/common.sh@445 -- # head -n 1 00:15:08.840 11:42:39 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:08.840 11:42:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:08.840 192.168.100.9' 00:15:08.840 11:42:39 -- nvmf/common.sh@446 -- # tail -n +2 00:15:08.840 11:42:39 -- nvmf/common.sh@446 -- # head -n 1 00:15:08.840 11:42:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:08.840 11:42:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:08.840 11:42:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:08.840 11:42:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:08.840 11:42:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:08.840 11:42:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:08.840 11:42:39 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:08.840 11:42:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.840 11:42:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.840 11:42:39 -- common/autotest_common.sh@10 -- # set +x 00:15:08.840 11:42:39 -- nvmf/common.sh@469 -- # nvmfpid=3694838 00:15:08.840 11:42:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.840 11:42:39 -- nvmf/common.sh@470 -- # waitforlisten 3694838 00:15:08.840 11:42:39 -- common/autotest_common.sh@829 -- # '[' -z 3694838 ']' 00:15:08.840 11:42:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.840 11:42:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.840 11:42:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.840 11:42:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.840 11:42:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.098 [2024-12-03 11:42:39.463044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:09.098 [2024-12-03 11:42:39.463093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.098 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.098 [2024-12-03 11:42:39.534168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.098 [2024-12-03 11:42:39.607883] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.098 [2024-12-03 11:42:39.607989] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.098 [2024-12-03 11:42:39.607999] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.098 [2024-12-03 11:42:39.608008] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:09.098 [2024-12-03 11:42:39.608052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.098 [2024-12-03 11:42:39.608167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.098 [2024-12-03 11:42:39.608189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.098 [2024-12-03 11:42:39.608190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.031 11:42:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.031 11:42:40 -- common/autotest_common.sh@862 -- # return 0 00:15:10.031 11:42:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:10.031 11:42:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.031 11:42:40 -- common/autotest_common.sh@10 -- # set +x 00:15:10.031 11:42:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.031 11:42:40 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:10.031 11:42:40 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7123 00:15:10.031 [2024-12-03 11:42:40.484826] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:10.031 11:42:40 -- target/invalid.sh@40 -- # out='request: 00:15:10.031 { 00:15:10.031 "nqn": "nqn.2016-06.io.spdk:cnode7123", 00:15:10.031 "tgt_name": "foobar", 00:15:10.031 "method": "nvmf_create_subsystem", 00:15:10.031 "req_id": 1 00:15:10.031 } 00:15:10.031 Got JSON-RPC error response 00:15:10.031 response: 00:15:10.031 { 00:15:10.031 "code": -32603, 00:15:10.031 "message": "Unable to find target foobar" 00:15:10.031 }' 00:15:10.031 11:42:40 -- target/invalid.sh@41 -- # [[ request: 00:15:10.031 { 00:15:10.031 "nqn": "nqn.2016-06.io.spdk:cnode7123", 00:15:10.031 "tgt_name": "foobar", 00:15:10.031 "method": "nvmf_create_subsystem", 00:15:10.031 "req_id": 1 00:15:10.031 } 00:15:10.031 Got JSON-RPC error response 00:15:10.031 response: 00:15:10.031 { 00:15:10.031 "code": -32603, 00:15:10.031 "message": "Unable to find target foobar" 00:15:10.031 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:10.032 11:42:40 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:10.032 11:42:40 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8731 00:15:10.289 [2024-12-03 11:42:40.685537] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8731: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:10.289 11:42:40 -- target/invalid.sh@45 -- # out='request: 00:15:10.289 { 00:15:10.289 "nqn": "nqn.2016-06.io.spdk:cnode8731", 00:15:10.289 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:10.289 "method": "nvmf_create_subsystem", 00:15:10.289 "req_id": 1 00:15:10.289 } 00:15:10.289 Got JSON-RPC error response 00:15:10.289 response: 00:15:10.289 { 00:15:10.289 "code": -32602, 00:15:10.289 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:10.289 }' 00:15:10.289 11:42:40 -- target/invalid.sh@46 -- # [[ request: 00:15:10.289 { 00:15:10.289 "nqn": "nqn.2016-06.io.spdk:cnode8731", 00:15:10.289 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:10.289 "method": "nvmf_create_subsystem", 00:15:10.289 "req_id": 1 00:15:10.289 } 00:15:10.289 Got JSON-RPC error response 00:15:10.289 response: 00:15:10.289 { 00:15:10.289 "code": 
-32602, 00:15:10.289 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:10.289 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:10.290 11:42:40 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:10.290 11:42:40 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22599 00:15:10.290 [2024-12-03 11:42:40.890147] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22599: invalid model number 'SPDK_Controller' 00:15:10.546 11:42:40 -- target/invalid.sh@50 -- # out='request: 00:15:10.546 { 00:15:10.546 "nqn": "nqn.2016-06.io.spdk:cnode22599", 00:15:10.546 "model_number": "SPDK_Controller\u001f", 00:15:10.546 "method": "nvmf_create_subsystem", 00:15:10.546 "req_id": 1 00:15:10.546 } 00:15:10.546 Got JSON-RPC error response 00:15:10.546 response: 00:15:10.546 { 00:15:10.546 "code": -32602, 00:15:10.546 "message": "Invalid MN SPDK_Controller\u001f" 00:15:10.546 }' 00:15:10.546 11:42:40 -- target/invalid.sh@51 -- # [[ request: 00:15:10.546 { 00:15:10.546 "nqn": "nqn.2016-06.io.spdk:cnode22599", 00:15:10.546 "model_number": "SPDK_Controller\u001f", 00:15:10.546 "method": "nvmf_create_subsystem", 00:15:10.546 "req_id": 1 00:15:10.546 } 00:15:10.546 Got JSON-RPC error response 00:15:10.546 response: 00:15:10.546 { 00:15:10.546 "code": -32602, 00:15:10.546 "message": "Invalid MN SPDK_Controller\u001f" 00:15:10.546 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:10.546 11:42:40 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:10.546 11:42:40 -- target/invalid.sh@19 -- # local length=21 ll 00:15:10.546 11:42:40 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:10.546 11:42:40 -- target/invalid.sh@21 -- # local chars 00:15:10.546 11:42:40 -- target/invalid.sh@22 -- # local string 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 88 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # string+=X 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 67 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # string+=C 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 109 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # string+=m 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 95 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x5f' 
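Each negative test in invalid.sh follows the same pattern: call nvmf_create_subsystem through rpc.py with one malformed argument, capture the JSON-RPC error response into $out, and glob-match it against the expected message, as the three checks above do for a bogus target name, a serial number with a trailing 0x1f control character, and a model number with the same control character. A sketch of that pattern (the exact error-capture redirection used by the script is not visible in the trace, so the 2>&1 and trailing || true are assumptions):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode

out=$("$rpc" nvmf_create_subsystem -t foobar "${nqn}7123" 2>&1) || true
[[ $out == *'Unable to find target'* ]]     # unknown target name is rejected

out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' "${nqn}8731" 2>&1) || true
[[ $out == *'Invalid SN'* ]]                # serial number containing a control character is rejected

out=$("$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' "${nqn}22599" 2>&1) || true
[[ $out == *'Invalid MN'* ]]                # model number containing a control character is rejected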
00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # string+=_ 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 99 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # string+=c 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 73 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # string+=I 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 106 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # string+=j 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 99 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # string+=c 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.546 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.546 11:42:40 -- target/invalid.sh@25 -- # printf %x 98 00:15:10.547 11:42:40 -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:10.547 11:42:40 -- target/invalid.sh@25 -- # string+=b 00:15:10.547 11:42:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 64 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=@ 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 99 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=c 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 51 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=3 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 115 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=s 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 122 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=z 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 126 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x7e' 
00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+='~' 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 114 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=r 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 69 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=E 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 114 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=r 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 63 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+='?' 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 76 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=L 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # printf %x 52 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:10.547 11:42:41 -- target/invalid.sh@25 -- # string+=4 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.547 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.547 11:42:41 -- target/invalid.sh@28 -- # [[ X == \- ]] 00:15:10.547 11:42:41 -- target/invalid.sh@31 -- # echo 'XCm_cIjcb@c3sz~rEr?L4' 00:15:10.547 11:42:41 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'XCm_cIjcb@c3sz~rEr?L4' nqn.2016-06.io.spdk:cnode366 00:15:10.804 [2024-12-03 11:42:41.255375] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode366: invalid serial number 'XCm_cIjcb@c3sz~rEr?L4' 00:15:10.804 11:42:41 -- target/invalid.sh@54 -- # out='request: 00:15:10.804 { 00:15:10.804 "nqn": "nqn.2016-06.io.spdk:cnode366", 00:15:10.804 "serial_number": "XCm_cIjcb@c3sz~rEr?L4", 00:15:10.804 "method": "nvmf_create_subsystem", 00:15:10.804 "req_id": 1 00:15:10.804 } 00:15:10.804 Got JSON-RPC error response 00:15:10.804 response: 00:15:10.804 { 00:15:10.804 "code": -32602, 00:15:10.804 "message": "Invalid SN XCm_cIjcb@c3sz~rEr?L4" 00:15:10.804 }' 00:15:10.804 11:42:41 -- target/invalid.sh@55 -- # [[ request: 00:15:10.804 { 00:15:10.804 "nqn": "nqn.2016-06.io.spdk:cnode366", 00:15:10.804 "serial_number": "XCm_cIjcb@c3sz~rEr?L4", 00:15:10.804 "method": "nvmf_create_subsystem", 00:15:10.804 "req_id": 1 00:15:10.804 } 00:15:10.804 Got JSON-RPC error response 00:15:10.804 response: 00:15:10.804 { 00:15:10.804 "code": -32602, 00:15:10.804 "message": "Invalid SN XCm_cIjcb@c3sz~rEr?L4" 
00:15:10.804 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:10.804 11:42:41 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:10.804 11:42:41 -- target/invalid.sh@19 -- # local length=41 ll 00:15:10.804 11:42:41 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:10.804 11:42:41 -- target/invalid.sh@21 -- # local chars 00:15:10.804 11:42:41 -- target/invalid.sh@22 -- # local string 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 103 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=g 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 61 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+== 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 76 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=L 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 123 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+='{' 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 82 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=R 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 115 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=s 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 86 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=V 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 90 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=Z 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 88 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=X 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 72 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=H 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 40 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+='(' 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 90 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=Z 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 120 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # string+=x 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.804 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.804 11:42:41 -- target/invalid.sh@25 -- # printf %x 126 00:15:10.805 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:10.805 11:42:41 -- target/invalid.sh@25 -- # string+='~' 00:15:10.805 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.805 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.805 11:42:41 -- target/invalid.sh@25 -- # printf %x 119 00:15:10.805 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:10.805 11:42:41 -- target/invalid.sh@25 -- # string+=w 00:15:10.805 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.805 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:10.805 11:42:41 -- target/invalid.sh@25 -- # printf %x 43 00:15:10.805 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:10.805 11:42:41 -- target/invalid.sh@25 -- # string+=+ 00:15:10.805 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:10.805 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 102 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=f 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 96 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+='`' 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 81 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=Q 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 121 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=y 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 122 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=z 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 53 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=5 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 112 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=p 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 65 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=A 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 77 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=M 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 102 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=f 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 63 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+='?' 
00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 108 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=l 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 123 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+='{' 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 48 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=0 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 50 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=2 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 37 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=% 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 121 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=y 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 89 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=Y 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 92 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+='\' 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 38 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+='&' 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 71 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=G 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 81 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=Q 
00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 76 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=L 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 110 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=n 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # printf %x 99 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:11.077 11:42:41 -- target/invalid.sh@25 -- # string+=c 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.077 11:42:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.077 11:42:41 -- target/invalid.sh@28 -- # [[ g == \- ]] 00:15:11.077 11:42:41 -- target/invalid.sh@31 -- # echo 'g=L{RsVZXH(Zx~w+f`Qyz5pAMf?l{02%yY\&GQLnc' 00:15:11.077 11:42:41 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'g=L{RsVZXH(Zx~w+f`Qyz5pAMf?l{02%yY\&GQLnc' nqn.2016-06.io.spdk:cnode13404 00:15:11.336 [2024-12-03 11:42:41.765078] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13404: invalid model number 'g=L{RsVZXH(Zx~w+f`Qyz5pAMf?l{02%yY\&GQLnc' 00:15:11.336 11:42:41 -- target/invalid.sh@58 -- # out='request: 00:15:11.336 { 00:15:11.336 "nqn": "nqn.2016-06.io.spdk:cnode13404", 00:15:11.336 "model_number": "g=L{RsVZXH(Zx~w+f`Qyz5pAMf?l{02%yY\\&GQLnc", 00:15:11.336 "method": "nvmf_create_subsystem", 00:15:11.336 "req_id": 1 00:15:11.336 } 00:15:11.336 Got JSON-RPC error response 00:15:11.336 response: 00:15:11.336 { 00:15:11.336 "code": -32602, 00:15:11.336 "message": "Invalid MN g=L{RsVZXH(Zx~w+f`Qyz5pAMf?l{02%yY\\&GQLnc" 00:15:11.336 }' 00:15:11.336 11:42:41 -- target/invalid.sh@59 -- # [[ request: 00:15:11.336 { 00:15:11.336 "nqn": "nqn.2016-06.io.spdk:cnode13404", 00:15:11.336 "model_number": "g=L{RsVZXH(Zx~w+f`Qyz5pAMf?l{02%yY\\&GQLnc", 00:15:11.336 "method": "nvmf_create_subsystem", 00:15:11.336 "req_id": 1 00:15:11.336 } 00:15:11.336 Got JSON-RPC error response 00:15:11.336 response: 00:15:11.336 { 00:15:11.336 "code": -32602, 00:15:11.336 "message": "Invalid MN g=L{RsVZXH(Zx~w+f`Qyz5pAMf?l{02%yY\\&GQLnc" 00:15:11.336 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:11.336 11:42:41 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:11.594 [2024-12-03 11:42:41.975806] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7ee970/0x7f2e60) succeed. 00:15:11.594 [2024-12-03 11:42:41.985031] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7eff60/0x834500) succeed. 
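The long character-by-character trace above is gen_random_s from target/invalid.sh: it picks bytes with printf %x / echo -e until it has a 41-character string, which nvmf_create_subsystem -d must then reject because the NVMe model-number field only holds 40 bytes. A minimal sketch of the same negative test, reusing only the rpc.py flags visible in the trace (the rpc.py path and the NQN below are placeholders):
  # Negative test: a 41-character model number must be rejected ("Invalid MN").
  mn=$(printf 'A%.0s' {1..41})                       # deterministic 41-char string
  out=$(./scripts/rpc.py nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode1 2>&1 || true)
  [[ $out == *"Invalid MN"* ]] && echo "model-number length check works"
The serial-number case earlier in the trace is the same pattern with -s and a 21-character string against the 20-byte SN field.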
00:15:11.594 11:42:42 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:11.852 11:42:42 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:11.852 11:42:42 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:11.852 192.168.100.9' 00:15:11.852 11:42:42 -- target/invalid.sh@67 -- # head -n 1 00:15:11.852 11:42:42 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:11.852 11:42:42 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:12.109 [2024-12-03 11:42:42.498042] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:12.110 11:42:42 -- target/invalid.sh@69 -- # out='request: 00:15:12.110 { 00:15:12.110 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:12.110 "listen_address": { 00:15:12.110 "trtype": "rdma", 00:15:12.110 "traddr": "192.168.100.8", 00:15:12.110 "trsvcid": "4421" 00:15:12.110 }, 00:15:12.110 "method": "nvmf_subsystem_remove_listener", 00:15:12.110 "req_id": 1 00:15:12.110 } 00:15:12.110 Got JSON-RPC error response 00:15:12.110 response: 00:15:12.110 { 00:15:12.110 "code": -32602, 00:15:12.110 "message": "Invalid parameters" 00:15:12.110 }' 00:15:12.110 11:42:42 -- target/invalid.sh@70 -- # [[ request: 00:15:12.110 { 00:15:12.110 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:12.110 "listen_address": { 00:15:12.110 "trtype": "rdma", 00:15:12.110 "traddr": "192.168.100.8", 00:15:12.110 "trsvcid": "4421" 00:15:12.110 }, 00:15:12.110 "method": "nvmf_subsystem_remove_listener", 00:15:12.110 "req_id": 1 00:15:12.110 } 00:15:12.110 Got JSON-RPC error response 00:15:12.110 response: 00:15:12.110 { 00:15:12.110 "code": -32602, 00:15:12.110 "message": "Invalid parameters" 00:15:12.110 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:12.110 11:42:42 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28974 -i 0 00:15:12.110 [2024-12-03 11:42:42.690650] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28974: invalid cntlid range [0-65519] 00:15:12.368 11:42:42 -- target/invalid.sh@73 -- # out='request: 00:15:12.368 { 00:15:12.368 "nqn": "nqn.2016-06.io.spdk:cnode28974", 00:15:12.368 "min_cntlid": 0, 00:15:12.368 "method": "nvmf_create_subsystem", 00:15:12.368 "req_id": 1 00:15:12.368 } 00:15:12.368 Got JSON-RPC error response 00:15:12.368 response: 00:15:12.368 { 00:15:12.368 "code": -32602, 00:15:12.368 "message": "Invalid cntlid range [0-65519]" 00:15:12.368 }' 00:15:12.368 11:42:42 -- target/invalid.sh@74 -- # [[ request: 00:15:12.368 { 00:15:12.368 "nqn": "nqn.2016-06.io.spdk:cnode28974", 00:15:12.368 "min_cntlid": 0, 00:15:12.368 "method": "nvmf_create_subsystem", 00:15:12.368 "req_id": 1 00:15:12.368 } 00:15:12.368 Got JSON-RPC error response 00:15:12.368 response: 00:15:12.368 { 00:15:12.368 "code": -32602, 00:15:12.368 "message": "Invalid cntlid range [0-65519]" 00:15:12.368 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:12.368 11:42:42 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5522 -i 65520 00:15:12.368 [2024-12-03 11:42:42.883330] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5522: invalid cntlid range [65520-65519] 00:15:12.368 
11:42:42 -- target/invalid.sh@75 -- # out='request: 00:15:12.368 { 00:15:12.368 "nqn": "nqn.2016-06.io.spdk:cnode5522", 00:15:12.368 "min_cntlid": 65520, 00:15:12.368 "method": "nvmf_create_subsystem", 00:15:12.368 "req_id": 1 00:15:12.368 } 00:15:12.368 Got JSON-RPC error response 00:15:12.368 response: 00:15:12.368 { 00:15:12.368 "code": -32602, 00:15:12.368 "message": "Invalid cntlid range [65520-65519]" 00:15:12.368 }' 00:15:12.368 11:42:42 -- target/invalid.sh@76 -- # [[ request: 00:15:12.368 { 00:15:12.368 "nqn": "nqn.2016-06.io.spdk:cnode5522", 00:15:12.368 "min_cntlid": 65520, 00:15:12.368 "method": "nvmf_create_subsystem", 00:15:12.368 "req_id": 1 00:15:12.368 } 00:15:12.368 Got JSON-RPC error response 00:15:12.368 response: 00:15:12.368 { 00:15:12.368 "code": -32602, 00:15:12.368 "message": "Invalid cntlid range [65520-65519]" 00:15:12.368 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:12.368 11:42:42 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12122 -I 0 00:15:12.626 [2024-12-03 11:42:43.064005] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12122: invalid cntlid range [1-0] 00:15:12.626 11:42:43 -- target/invalid.sh@77 -- # out='request: 00:15:12.626 { 00:15:12.626 "nqn": "nqn.2016-06.io.spdk:cnode12122", 00:15:12.626 "max_cntlid": 0, 00:15:12.626 "method": "nvmf_create_subsystem", 00:15:12.626 "req_id": 1 00:15:12.626 } 00:15:12.626 Got JSON-RPC error response 00:15:12.626 response: 00:15:12.626 { 00:15:12.626 "code": -32602, 00:15:12.626 "message": "Invalid cntlid range [1-0]" 00:15:12.626 }' 00:15:12.626 11:42:43 -- target/invalid.sh@78 -- # [[ request: 00:15:12.626 { 00:15:12.626 "nqn": "nqn.2016-06.io.spdk:cnode12122", 00:15:12.626 "max_cntlid": 0, 00:15:12.626 "method": "nvmf_create_subsystem", 00:15:12.626 "req_id": 1 00:15:12.626 } 00:15:12.626 Got JSON-RPC error response 00:15:12.626 response: 00:15:12.626 { 00:15:12.626 "code": -32602, 00:15:12.626 "message": "Invalid cntlid range [1-0]" 00:15:12.626 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:12.626 11:42:43 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11362 -I 65520 00:15:12.884 [2024-12-03 11:42:43.252670] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11362: invalid cntlid range [1-65520] 00:15:12.884 11:42:43 -- target/invalid.sh@79 -- # out='request: 00:15:12.884 { 00:15:12.884 "nqn": "nqn.2016-06.io.spdk:cnode11362", 00:15:12.884 "max_cntlid": 65520, 00:15:12.884 "method": "nvmf_create_subsystem", 00:15:12.884 "req_id": 1 00:15:12.884 } 00:15:12.884 Got JSON-RPC error response 00:15:12.884 response: 00:15:12.884 { 00:15:12.884 "code": -32602, 00:15:12.884 "message": "Invalid cntlid range [1-65520]" 00:15:12.884 }' 00:15:12.884 11:42:43 -- target/invalid.sh@80 -- # [[ request: 00:15:12.884 { 00:15:12.884 "nqn": "nqn.2016-06.io.spdk:cnode11362", 00:15:12.884 "max_cntlid": 65520, 00:15:12.884 "method": "nvmf_create_subsystem", 00:15:12.884 "req_id": 1 00:15:12.884 } 00:15:12.884 Got JSON-RPC error response 00:15:12.884 response: 00:15:12.884 { 00:15:12.884 "code": -32602, 00:15:12.884 "message": "Invalid cntlid range [1-65520]" 00:15:12.884 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:12.884 11:42:43 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode28504 -i 6 -I 5 00:15:12.884 [2024-12-03 11:42:43.433290] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28504: invalid cntlid range [6-5] 00:15:12.884 11:42:43 -- target/invalid.sh@83 -- # out='request: 00:15:12.884 { 00:15:12.884 "nqn": "nqn.2016-06.io.spdk:cnode28504", 00:15:12.884 "min_cntlid": 6, 00:15:12.884 "max_cntlid": 5, 00:15:12.884 "method": "nvmf_create_subsystem", 00:15:12.884 "req_id": 1 00:15:12.884 } 00:15:12.884 Got JSON-RPC error response 00:15:12.884 response: 00:15:12.884 { 00:15:12.884 "code": -32602, 00:15:12.884 "message": "Invalid cntlid range [6-5]" 00:15:12.884 }' 00:15:12.884 11:42:43 -- target/invalid.sh@84 -- # [[ request: 00:15:12.884 { 00:15:12.884 "nqn": "nqn.2016-06.io.spdk:cnode28504", 00:15:12.884 "min_cntlid": 6, 00:15:12.884 "max_cntlid": 5, 00:15:12.884 "method": "nvmf_create_subsystem", 00:15:12.884 "req_id": 1 00:15:12.884 } 00:15:12.884 Got JSON-RPC error response 00:15:12.884 response: 00:15:12.884 { 00:15:12.884 "code": -32602, 00:15:12.884 "message": "Invalid cntlid range [6-5]" 00:15:12.884 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:12.884 11:42:43 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:13.143 11:42:43 -- target/invalid.sh@87 -- # out='request: 00:15:13.143 { 00:15:13.143 "name": "foobar", 00:15:13.143 "method": "nvmf_delete_target", 00:15:13.143 "req_id": 1 00:15:13.143 } 00:15:13.143 Got JSON-RPC error response 00:15:13.143 response: 00:15:13.143 { 00:15:13.143 "code": -32602, 00:15:13.143 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:13.143 }' 00:15:13.143 11:42:43 -- target/invalid.sh@88 -- # [[ request: 00:15:13.143 { 00:15:13.143 "name": "foobar", 00:15:13.143 "method": "nvmf_delete_target", 00:15:13.143 "req_id": 1 00:15:13.143 } 00:15:13.143 Got JSON-RPC error response 00:15:13.143 response: 00:15:13.143 { 00:15:13.143 "code": -32602, 00:15:13.143 "message": "The specified target doesn't exist, cannot delete it." 
00:15:13.143 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:13.143 11:42:43 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:13.143 11:42:43 -- target/invalid.sh@91 -- # nvmftestfini 00:15:13.143 11:42:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:13.143 11:42:43 -- nvmf/common.sh@116 -- # sync 00:15:13.143 11:42:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:13.143 11:42:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:13.143 11:42:43 -- nvmf/common.sh@119 -- # set +e 00:15:13.143 11:42:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:13.143 11:42:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:13.143 rmmod nvme_rdma 00:15:13.143 rmmod nvme_fabrics 00:15:13.143 11:42:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:13.143 11:42:43 -- nvmf/common.sh@123 -- # set -e 00:15:13.143 11:42:43 -- nvmf/common.sh@124 -- # return 0 00:15:13.143 11:42:43 -- nvmf/common.sh@477 -- # '[' -n 3694838 ']' 00:15:13.143 11:42:43 -- nvmf/common.sh@478 -- # killprocess 3694838 00:15:13.143 11:42:43 -- common/autotest_common.sh@936 -- # '[' -z 3694838 ']' 00:15:13.143 11:42:43 -- common/autotest_common.sh@940 -- # kill -0 3694838 00:15:13.143 11:42:43 -- common/autotest_common.sh@941 -- # uname 00:15:13.143 11:42:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:13.143 11:42:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3694838 00:15:13.143 11:42:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:13.143 11:42:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:13.143 11:42:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3694838' 00:15:13.143 killing process with pid 3694838 00:15:13.143 11:42:43 -- common/autotest_common.sh@955 -- # kill 3694838 00:15:13.143 11:42:43 -- common/autotest_common.sh@960 -- # wait 3694838 00:15:13.401 11:42:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:13.401 11:42:43 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:13.401 00:15:13.401 real 0m11.314s 00:15:13.401 user 0m21.341s 00:15:13.402 sys 0m6.284s 00:15:13.402 11:42:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.402 11:42:43 -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 ************************************ 00:15:13.402 END TEST nvmf_invalid 00:15:13.402 ************************************ 00:15:13.402 11:42:44 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:13.402 11:42:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.402 11:42:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.402 11:42:44 -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 ************************************ 00:15:13.402 START TEST nvmf_abort 00:15:13.402 ************************************ 00:15:13.402 11:42:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:13.660 * Looking for test storage... 
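The invalid.sh phase that just finished also probed the controller-ID limits: min_cntlid 0 and 65520, max_cntlid 0 and 65520, and an inverted 6-5 range were all refused with "Invalid cntlid range", and nvmf_delete_target on the unknown target "foobar" failed as expected. A hedged sketch of the cntlid checks, using only the -i/-I flags shown in the trace (rpc.py path and NQN are placeholders):
  # Valid cntlid values are 1..65519 and min must not exceed max; every call below should fail.
  for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
      # $args is left unquoted on purpose so "-i 6 -I 5" splits into two flags
      out=$(./scripts/rpc.py nvmf_create_subsystem $args nqn.2016-06.io.spdk:cnode1 2>&1 || true)
      [[ $out == *"Invalid cntlid range"* ]] || echo "unexpectedly accepted: $args"
  done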
00:15:13.660 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:13.660 11:42:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:13.660 11:42:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:13.660 11:42:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:13.660 11:42:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:13.660 11:42:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:13.660 11:42:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:13.660 11:42:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:13.660 11:42:44 -- scripts/common.sh@335 -- # IFS=.-: 00:15:13.660 11:42:44 -- scripts/common.sh@335 -- # read -ra ver1 00:15:13.660 11:42:44 -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.660 11:42:44 -- scripts/common.sh@336 -- # read -ra ver2 00:15:13.660 11:42:44 -- scripts/common.sh@337 -- # local 'op=<' 00:15:13.660 11:42:44 -- scripts/common.sh@339 -- # ver1_l=2 00:15:13.660 11:42:44 -- scripts/common.sh@340 -- # ver2_l=1 00:15:13.660 11:42:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:13.660 11:42:44 -- scripts/common.sh@343 -- # case "$op" in 00:15:13.660 11:42:44 -- scripts/common.sh@344 -- # : 1 00:15:13.660 11:42:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:13.660 11:42:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.660 11:42:44 -- scripts/common.sh@364 -- # decimal 1 00:15:13.660 11:42:44 -- scripts/common.sh@352 -- # local d=1 00:15:13.660 11:42:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.660 11:42:44 -- scripts/common.sh@354 -- # echo 1 00:15:13.660 11:42:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:13.660 11:42:44 -- scripts/common.sh@365 -- # decimal 2 00:15:13.660 11:42:44 -- scripts/common.sh@352 -- # local d=2 00:15:13.660 11:42:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.660 11:42:44 -- scripts/common.sh@354 -- # echo 2 00:15:13.660 11:42:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:13.661 11:42:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:13.661 11:42:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:13.661 11:42:44 -- scripts/common.sh@367 -- # return 0 00:15:13.661 11:42:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.661 11:42:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.661 --rc genhtml_branch_coverage=1 00:15:13.661 --rc genhtml_function_coverage=1 00:15:13.661 --rc genhtml_legend=1 00:15:13.661 --rc geninfo_all_blocks=1 00:15:13.661 --rc geninfo_unexecuted_blocks=1 00:15:13.661 00:15:13.661 ' 00:15:13.661 11:42:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.661 --rc genhtml_branch_coverage=1 00:15:13.661 --rc genhtml_function_coverage=1 00:15:13.661 --rc genhtml_legend=1 00:15:13.661 --rc geninfo_all_blocks=1 00:15:13.661 --rc geninfo_unexecuted_blocks=1 00:15:13.661 00:15:13.661 ' 00:15:13.661 11:42:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.661 --rc genhtml_branch_coverage=1 00:15:13.661 --rc genhtml_function_coverage=1 00:15:13.661 --rc genhtml_legend=1 00:15:13.661 --rc geninfo_all_blocks=1 00:15:13.661 --rc geninfo_unexecuted_blocks=1 00:15:13.661 00:15:13.661 ' 
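The lt/cmp_versions trace above is how common.sh decides whether the installed lcov predates 2.0: both version strings are split on '.', '-' and ':' and compared field by field. A standalone sketch of that comparison pattern (version_lt is an illustrative name, not the helper the suite itself defines):
  # Split two version strings on '.', '-' and ':' and compare them field by field.
  version_lt() {
      local -a a b
      local i
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "installed lcov predates 2.0"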
00:15:13.661 11:42:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.661 --rc genhtml_branch_coverage=1 00:15:13.661 --rc genhtml_function_coverage=1 00:15:13.661 --rc genhtml_legend=1 00:15:13.661 --rc geninfo_all_blocks=1 00:15:13.661 --rc geninfo_unexecuted_blocks=1 00:15:13.661 00:15:13.661 ' 00:15:13.661 11:42:44 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.661 11:42:44 -- nvmf/common.sh@7 -- # uname -s 00:15:13.661 11:42:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.661 11:42:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.661 11:42:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.661 11:42:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.661 11:42:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.661 11:42:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.661 11:42:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.661 11:42:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.661 11:42:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.661 11:42:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.661 11:42:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:13.661 11:42:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:13.661 11:42:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.661 11:42:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.661 11:42:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.661 11:42:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:13.661 11:42:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.661 11:42:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.661 11:42:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.661 11:42:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.661 11:42:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.661 11:42:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.661 11:42:44 -- paths/export.sh@5 -- # export PATH 00:15:13.661 11:42:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.661 11:42:44 -- nvmf/common.sh@46 -- # : 0 00:15:13.661 11:42:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:13.661 11:42:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:13.661 11:42:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:13.661 11:42:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.661 11:42:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.661 11:42:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:13.661 11:42:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:13.661 11:42:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:13.661 11:42:44 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.661 11:42:44 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:13.661 11:42:44 -- target/abort.sh@14 -- # nvmftestinit 00:15:13.661 11:42:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:13.661 11:42:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.661 11:42:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:13.661 11:42:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:13.661 11:42:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:13.661 11:42:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.661 11:42:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.661 11:42:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.661 11:42:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:13.661 11:42:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:13.661 11:42:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:13.661 11:42:44 -- common/autotest_common.sh@10 -- # set +x 00:15:20.220 11:42:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:20.220 11:42:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:20.220 11:42:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:20.220 11:42:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:20.220 11:42:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:20.220 11:42:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:20.220 11:42:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:20.220 11:42:50 -- nvmf/common.sh@294 -- # net_devs=() 00:15:20.220 11:42:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:20.220 11:42:50 -- nvmf/common.sh@295 -- 
# e810=() 00:15:20.220 11:42:50 -- nvmf/common.sh@295 -- # local -ga e810 00:15:20.220 11:42:50 -- nvmf/common.sh@296 -- # x722=() 00:15:20.220 11:42:50 -- nvmf/common.sh@296 -- # local -ga x722 00:15:20.220 11:42:50 -- nvmf/common.sh@297 -- # mlx=() 00:15:20.220 11:42:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:20.220 11:42:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.220 11:42:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:20.220 11:42:50 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:20.220 11:42:50 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:20.220 11:42:50 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:20.220 11:42:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:20.220 11:42:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:20.220 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:20.220 11:42:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:20.220 11:42:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:20.220 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:20.220 11:42:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:20.220 11:42:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:20.220 11:42:50 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.220 11:42:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
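The array juggling above is nvmf/common.sh narrowing the PCI device list to the Mellanox mlx5 NICs (vendor 0x15b3, device 0x1015) and then resolving each PCI address to its kernel net device through sysfs. A minimal sketch of that sysfs lookup, assuming the PCI address printed in the log:
  # Map a PCI function to its kernel net device name via sysfs.
  shopt -s nullglob                      # a missing device yields an empty array, not the literal pattern
  pci=0000:d9:00.0                       # address taken from the log above; adjust for the local machine
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
  if (( ${#pci_net_devs[@]} > 0 )); then
      echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
  fi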
00:15:20.220 11:42:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.220 11:42:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:20.220 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:20.220 11:42:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.220 11:42:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.220 11:42:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:20.220 11:42:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.220 11:42:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:20.220 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:20.220 11:42:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.220 11:42:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:20.220 11:42:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:20.220 11:42:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:20.220 11:42:50 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:20.220 11:42:50 -- nvmf/common.sh@57 -- # uname 00:15:20.220 11:42:50 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:20.220 11:42:50 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:20.220 11:42:50 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:20.220 11:42:50 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:20.220 11:42:50 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:20.220 11:42:50 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:20.220 11:42:50 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:20.220 11:42:50 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:20.220 11:42:50 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:20.220 11:42:50 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:20.220 11:42:50 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:20.220 11:42:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:20.220 11:42:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:20.220 11:42:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:20.220 11:42:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:20.220 11:42:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:20.220 11:42:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:20.220 11:42:50 -- nvmf/common.sh@104 -- # continue 2 00:15:20.220 11:42:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.220 11:42:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:20.220 11:42:50 -- nvmf/common.sh@104 -- # continue 2 00:15:20.220 11:42:50 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:20.220 11:42:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:20.220 11:42:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:20.220 11:42:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:20.220 11:42:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:20.220 11:42:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:20.220 11:42:50 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:20.220 11:42:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:20.220 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:20.220 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:20.220 altname enp217s0f0np0 00:15:20.220 altname ens818f0np0 00:15:20.220 inet 192.168.100.8/24 scope global mlx_0_0 00:15:20.220 valid_lft forever preferred_lft forever 00:15:20.220 11:42:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:20.220 11:42:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:20.220 11:42:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:20.220 11:42:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:20.220 11:42:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:20.220 11:42:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:20.220 11:42:50 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:20.220 11:42:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:20.220 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:20.220 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:20.220 altname enp217s0f1np1 00:15:20.220 altname ens818f1np1 00:15:20.220 inet 192.168.100.9/24 scope global mlx_0_1 00:15:20.220 valid_lft forever preferred_lft forever 00:15:20.220 11:42:50 -- nvmf/common.sh@410 -- # return 0 00:15:20.220 11:42:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.220 11:42:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:20.220 11:42:50 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:20.220 11:42:50 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:20.479 11:42:50 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:20.479 11:42:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:20.479 11:42:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:20.479 11:42:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:20.479 11:42:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:20.479 11:42:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:20.479 11:42:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:20.479 11:42:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.479 11:42:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:20.479 11:42:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:20.479 11:42:50 -- nvmf/common.sh@104 -- # continue 2 00:15:20.479 11:42:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:20.479 11:42:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.479 11:42:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:20.479 11:42:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.479 11:42:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:20.479 11:42:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:20.479 11:42:50 -- 
nvmf/common.sh@104 -- # continue 2 00:15:20.479 11:42:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:20.479 11:42:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:20.479 11:42:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:20.479 11:42:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:20.479 11:42:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:20.479 11:42:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:20.479 11:42:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:20.479 11:42:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:20.479 11:42:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:20.479 11:42:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:20.479 11:42:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:20.479 11:42:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:20.479 11:42:50 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:20.479 192.168.100.9' 00:15:20.479 11:42:50 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:20.479 192.168.100.9' 00:15:20.479 11:42:50 -- nvmf/common.sh@445 -- # head -n 1 00:15:20.479 11:42:50 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:20.479 11:42:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:20.479 192.168.100.9' 00:15:20.479 11:42:50 -- nvmf/common.sh@446 -- # tail -n +2 00:15:20.479 11:42:50 -- nvmf/common.sh@446 -- # head -n 1 00:15:20.479 11:42:50 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:20.479 11:42:50 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:20.479 11:42:50 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:20.479 11:42:50 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:20.479 11:42:50 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:20.479 11:42:50 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:20.479 11:42:50 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:20.479 11:42:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:20.479 11:42:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.479 11:42:50 -- common/autotest_common.sh@10 -- # set +x 00:15:20.479 11:42:50 -- nvmf/common.sh@469 -- # nvmfpid=3699024 00:15:20.479 11:42:50 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:20.479 11:42:50 -- nvmf/common.sh@470 -- # waitforlisten 3699024 00:15:20.479 11:42:50 -- common/autotest_common.sh@829 -- # '[' -z 3699024 ']' 00:15:20.479 11:42:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.479 11:42:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.479 11:42:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.479 11:42:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.479 11:42:50 -- common/autotest_common.sh@10 -- # set +x 00:15:20.479 [2024-12-03 11:42:50.990508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
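The repeated ip/awk/cut pipeline above is the suite's get_ip_address helper reading the IPv4 address of each RDMA interface (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1) before NVMF_FIRST_TARGET_IP is exported. The same extraction as a standalone sketch, assuming the interface exists on the test host:
  # Print the IPv4 address assigned to an interface, stripping the /prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run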
00:15:20.479 [2024-12-03 11:42:50.990552] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.479 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.479 [2024-12-03 11:42:51.058631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:20.737 [2024-12-03 11:42:51.130728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.737 [2024-12-03 11:42:51.130843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.737 [2024-12-03 11:42:51.130853] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.738 [2024-12-03 11:42:51.130865] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.738 [2024-12-03 11:42:51.130966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.738 [2024-12-03 11:42:51.131050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.738 [2024-12-03 11:42:51.131052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.303 11:42:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.303 11:42:51 -- common/autotest_common.sh@862 -- # return 0 00:15:21.303 11:42:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:21.303 11:42:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.303 11:42:51 -- common/autotest_common.sh@10 -- # set +x 00:15:21.303 11:42:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.303 11:42:51 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:21.303 11:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.303 11:42:51 -- common/autotest_common.sh@10 -- # set +x 00:15:21.303 [2024-12-03 11:42:51.884853] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2137860/0x213bd50) succeed. 00:15:21.303 [2024-12-03 11:42:51.893916] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2138db0/0x217d3f0) succeed. 
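By this point abort.sh has started the target (nvmf_tgt -m 0xE, reactors on cores 1-3) and created the RDMA transport. A hedged sketch of that bring-up order; the paths are placeholders, the transport flags are the ones in the trace, and the retry loop merely stands in for the suite's waitforlisten helper (rpc_get_methods is used only as a cheap liveness probe):
  # Start the target, wait until its RPC socket answers, then create the RDMA transport.
  ./build/bin/nvmf_tgt -m 0xE &
  nvmfpid=$!
  for _ in {1..20}; do
      ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256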
00:15:21.562 11:42:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.562 11:42:51 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:21.562 11:42:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.562 11:42:51 -- common/autotest_common.sh@10 -- # set +x 00:15:21.562 Malloc0 00:15:21.562 11:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.562 11:42:52 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:21.562 11:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.562 11:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:21.562 Delay0 00:15:21.562 11:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.562 11:42:52 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:21.562 11:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.562 11:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:21.562 11:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.562 11:42:52 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:21.562 11:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.562 11:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:21.562 11:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.562 11:42:52 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:21.562 11:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.562 11:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:21.562 [2024-12-03 11:42:52.049060] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:21.562 11:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.562 11:42:52 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:21.562 11:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.562 11:42:52 -- common/autotest_common.sh@10 -- # set +x 00:15:21.562 11:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.562 11:42:52 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:21.562 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.562 [2024-12-03 11:42:52.146201] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:24.096 Initializing NVMe Controllers 00:15:24.096 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:24.096 controller IO queue size 128 less than required 00:15:24.096 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:24.096 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:24.096 Initialization complete. Launching workers. 
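The RPCs above assemble the abort-test fixture: a 64 MB malloc bdev wrapped in a delay bdev so I/O stays in flight long enough to be aborted (the delay arguments are in microseconds, i.e. one second here), exposed through nqn.2016-06.io.spdk:cnode0 on an RDMA listener at 192.168.100.8:4420, after which the bundled abort example drives it at queue depth 128. Condensed into one sketch, with every argument taken from the trace (only the rpc.py and binary paths are placeholders):
  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create 64 4096 -b Malloc0                            # 64 MB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0     # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  ./build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128                                     # queue depth 128
The "controller IO queue size 128 less than required" warning and the submitted/aborted counts that follow are the expected output of that example against the delayed namespace.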
00:15:24.096 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51795 00:15:24.096 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51856, failed to submit 62 00:15:24.096 success 51795, unsuccess 61, failed 0 00:15:24.096 11:42:54 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:24.096 11:42:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.096 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 11:42:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.096 11:42:54 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:24.096 11:42:54 -- target/abort.sh@38 -- # nvmftestfini 00:15:24.096 11:42:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:24.096 11:42:54 -- nvmf/common.sh@116 -- # sync 00:15:24.096 11:42:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:24.096 11:42:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:24.096 11:42:54 -- nvmf/common.sh@119 -- # set +e 00:15:24.096 11:42:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:24.096 11:42:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:24.096 rmmod nvme_rdma 00:15:24.096 rmmod nvme_fabrics 00:15:24.096 11:42:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:24.096 11:42:54 -- nvmf/common.sh@123 -- # set -e 00:15:24.096 11:42:54 -- nvmf/common.sh@124 -- # return 0 00:15:24.096 11:42:54 -- nvmf/common.sh@477 -- # '[' -n 3699024 ']' 00:15:24.096 11:42:54 -- nvmf/common.sh@478 -- # killprocess 3699024 00:15:24.096 11:42:54 -- common/autotest_common.sh@936 -- # '[' -z 3699024 ']' 00:15:24.096 11:42:54 -- common/autotest_common.sh@940 -- # kill -0 3699024 00:15:24.096 11:42:54 -- common/autotest_common.sh@941 -- # uname 00:15:24.096 11:42:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.096 11:42:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3699024 00:15:24.096 11:42:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:24.096 11:42:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:24.096 11:42:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3699024' 00:15:24.096 killing process with pid 3699024 00:15:24.096 11:42:54 -- common/autotest_common.sh@955 -- # kill 3699024 00:15:24.096 11:42:54 -- common/autotest_common.sh@960 -- # wait 3699024 00:15:24.096 11:42:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:24.096 11:42:54 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:24.096 00:15:24.096 real 0m10.649s 00:15:24.096 user 0m14.621s 00:15:24.096 sys 0m5.628s 00:15:24.096 11:42:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:24.096 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 ************************************ 00:15:24.096 END TEST nvmf_abort 00:15:24.096 ************************************ 00:15:24.096 11:42:54 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:24.096 11:42:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:24.096 11:42:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.096 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 ************************************ 00:15:24.096 START TEST nvmf_ns_hotplug_stress 00:15:24.096 ************************************ 00:15:24.096 11:42:54 -- common/autotest_common.sh@1114 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:24.354 * Looking for test storage... 00:15:24.354 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:24.354 11:42:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:24.354 11:42:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:24.354 11:42:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:24.354 11:42:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:24.354 11:42:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:24.354 11:42:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:24.354 11:42:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:24.354 11:42:54 -- scripts/common.sh@335 -- # IFS=.-: 00:15:24.354 11:42:54 -- scripts/common.sh@335 -- # read -ra ver1 00:15:24.354 11:42:54 -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.354 11:42:54 -- scripts/common.sh@336 -- # read -ra ver2 00:15:24.354 11:42:54 -- scripts/common.sh@337 -- # local 'op=<' 00:15:24.354 11:42:54 -- scripts/common.sh@339 -- # ver1_l=2 00:15:24.354 11:42:54 -- scripts/common.sh@340 -- # ver2_l=1 00:15:24.354 11:42:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:24.354 11:42:54 -- scripts/common.sh@343 -- # case "$op" in 00:15:24.354 11:42:54 -- scripts/common.sh@344 -- # : 1 00:15:24.354 11:42:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:24.354 11:42:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.354 11:42:54 -- scripts/common.sh@364 -- # decimal 1 00:15:24.354 11:42:54 -- scripts/common.sh@352 -- # local d=1 00:15:24.354 11:42:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.354 11:42:54 -- scripts/common.sh@354 -- # echo 1 00:15:24.354 11:42:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:24.354 11:42:54 -- scripts/common.sh@365 -- # decimal 2 00:15:24.354 11:42:54 -- scripts/common.sh@352 -- # local d=2 00:15:24.354 11:42:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.354 11:42:54 -- scripts/common.sh@354 -- # echo 2 00:15:24.354 11:42:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:24.354 11:42:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:24.354 11:42:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:24.354 11:42:54 -- scripts/common.sh@367 -- # return 0 00:15:24.354 11:42:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.354 11:42:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:24.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.354 --rc genhtml_branch_coverage=1 00:15:24.354 --rc genhtml_function_coverage=1 00:15:24.354 --rc genhtml_legend=1 00:15:24.354 --rc geninfo_all_blocks=1 00:15:24.354 --rc geninfo_unexecuted_blocks=1 00:15:24.354 00:15:24.354 ' 00:15:24.354 11:42:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:24.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.354 --rc genhtml_branch_coverage=1 00:15:24.354 --rc genhtml_function_coverage=1 00:15:24.354 --rc genhtml_legend=1 00:15:24.354 --rc geninfo_all_blocks=1 00:15:24.354 --rc geninfo_unexecuted_blocks=1 00:15:24.354 00:15:24.354 ' 00:15:24.354 11:42:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:24.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.354 --rc genhtml_branch_coverage=1 00:15:24.354 --rc genhtml_function_coverage=1 
00:15:24.354 --rc genhtml_legend=1 00:15:24.354 --rc geninfo_all_blocks=1 00:15:24.355 --rc geninfo_unexecuted_blocks=1 00:15:24.355 00:15:24.355 ' 00:15:24.355 11:42:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:24.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.355 --rc genhtml_branch_coverage=1 00:15:24.355 --rc genhtml_function_coverage=1 00:15:24.355 --rc genhtml_legend=1 00:15:24.355 --rc geninfo_all_blocks=1 00:15:24.355 --rc geninfo_unexecuted_blocks=1 00:15:24.355 00:15:24.355 ' 00:15:24.355 11:42:54 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.355 11:42:54 -- nvmf/common.sh@7 -- # uname -s 00:15:24.355 11:42:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.355 11:42:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.355 11:42:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.355 11:42:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.355 11:42:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.355 11:42:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.355 11:42:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.355 11:42:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.355 11:42:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.355 11:42:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.355 11:42:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:24.355 11:42:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:24.355 11:42:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.355 11:42:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.355 11:42:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.355 11:42:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:24.355 11:42:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.355 11:42:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.355 11:42:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.355 11:42:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.355 11:42:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.355 11:42:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.355 11:42:54 -- paths/export.sh@5 -- # export PATH 00:15:24.355 11:42:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.355 11:42:54 -- nvmf/common.sh@46 -- # : 0 00:15:24.355 11:42:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:24.355 11:42:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:24.355 11:42:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:24.355 11:42:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.355 11:42:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.355 11:42:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:24.355 11:42:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:24.355 11:42:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:24.355 11:42:54 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:24.355 11:42:54 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:24.355 11:42:54 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:24.355 11:42:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.355 11:42:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:24.355 11:42:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:24.355 11:42:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:24.355 11:42:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.355 11:42:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.355 11:42:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.355 11:42:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:24.355 11:42:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:24.355 11:42:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:24.355 11:42:54 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 11:43:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:30.946 11:43:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:30.946 11:43:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:30.946 11:43:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:30.946 11:43:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:30.946 11:43:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:30.946 11:43:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:30.946 11:43:01 -- nvmf/common.sh@294 -- # net_devs=() 00:15:30.946 11:43:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:30.946 11:43:01 -- nvmf/common.sh@295 -- 
# e810=() 00:15:30.946 11:43:01 -- nvmf/common.sh@295 -- # local -ga e810 00:15:30.946 11:43:01 -- nvmf/common.sh@296 -- # x722=() 00:15:30.946 11:43:01 -- nvmf/common.sh@296 -- # local -ga x722 00:15:30.946 11:43:01 -- nvmf/common.sh@297 -- # mlx=() 00:15:30.946 11:43:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:30.946 11:43:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.946 11:43:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:30.946 11:43:01 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:30.946 11:43:01 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:30.946 11:43:01 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:30.946 11:43:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:30.946 11:43:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:30.946 11:43:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:30.946 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:30.946 11:43:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:30.946 11:43:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:30.946 11:43:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:30.946 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:30.946 11:43:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:30.946 11:43:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:30.946 11:43:01 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:30.946 11:43:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:30.946 11:43:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.947 11:43:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:15:30.947 11:43:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.947 11:43:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:30.947 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:30.947 11:43:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.947 11:43:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:30.947 11:43:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.947 11:43:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:30.947 11:43:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.947 11:43:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:30.947 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:30.947 11:43:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.947 11:43:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:30.947 11:43:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:30.947 11:43:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:30.947 11:43:01 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:30.947 11:43:01 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:30.947 11:43:01 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:30.947 11:43:01 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:30.947 11:43:01 -- nvmf/common.sh@57 -- # uname 00:15:30.947 11:43:01 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:30.947 11:43:01 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:30.947 11:43:01 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:30.947 11:43:01 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:30.947 11:43:01 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:30.947 11:43:01 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:30.947 11:43:01 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:30.947 11:43:01 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:30.947 11:43:01 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:30.947 11:43:01 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:30.947 11:43:01 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:30.947 11:43:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:30.947 11:43:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:30.947 11:43:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:30.947 11:43:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:30.947 11:43:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:30.947 11:43:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:30.947 11:43:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:30.947 11:43:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:30.947 11:43:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:30.947 11:43:01 -- nvmf/common.sh@104 -- # continue 2 00:15:30.947 11:43:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:30.947 11:43:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:30.947 11:43:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:30.947 11:43:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:30.947 11:43:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:30.947 11:43:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:31.205 11:43:01 -- nvmf/common.sh@104 -- # continue 2 00:15:31.205 11:43:01 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:31.205 11:43:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:31.205 11:43:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:31.205 11:43:01 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:31.205 11:43:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:31.205 11:43:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:31.205 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:31.205 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:31.205 altname enp217s0f0np0 00:15:31.205 altname ens818f0np0 00:15:31.205 inet 192.168.100.8/24 scope global mlx_0_0 00:15:31.205 valid_lft forever preferred_lft forever 00:15:31.205 11:43:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:31.205 11:43:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:31.205 11:43:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:31.205 11:43:01 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:31.205 11:43:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:31.205 11:43:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:31.205 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:31.205 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:31.205 altname enp217s0f1np1 00:15:31.205 altname ens818f1np1 00:15:31.205 inet 192.168.100.9/24 scope global mlx_0_1 00:15:31.205 valid_lft forever preferred_lft forever 00:15:31.205 11:43:01 -- nvmf/common.sh@410 -- # return 0 00:15:31.205 11:43:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.205 11:43:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:31.205 11:43:01 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:31.205 11:43:01 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:31.205 11:43:01 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:31.205 11:43:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:31.205 11:43:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:31.205 11:43:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:31.205 11:43:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:31.205 11:43:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:31.205 11:43:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:31.205 11:43:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:31.205 11:43:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:31.205 11:43:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:31.205 11:43:01 -- nvmf/common.sh@104 -- # continue 2 00:15:31.205 11:43:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:31.205 11:43:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:31.205 11:43:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:31.205 11:43:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:31.205 11:43:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:31.205 11:43:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:31.205 11:43:01 -- 
nvmf/common.sh@104 -- # continue 2 00:15:31.205 11:43:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:31.205 11:43:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:31.205 11:43:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:31.205 11:43:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:31.205 11:43:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:31.205 11:43:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:31.205 11:43:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:31.205 11:43:01 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:31.205 192.168.100.9' 00:15:31.205 11:43:01 -- nvmf/common.sh@445 -- # head -n 1 00:15:31.205 11:43:01 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:31.205 192.168.100.9' 00:15:31.205 11:43:01 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:31.205 11:43:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:31.205 192.168.100.9' 00:15:31.205 11:43:01 -- nvmf/common.sh@446 -- # tail -n +2 00:15:31.205 11:43:01 -- nvmf/common.sh@446 -- # head -n 1 00:15:31.205 11:43:01 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:31.205 11:43:01 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:31.205 11:43:01 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:31.205 11:43:01 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:31.205 11:43:01 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:31.205 11:43:01 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:31.205 11:43:01 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:31.205 11:43:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.205 11:43:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.205 11:43:01 -- common/autotest_common.sh@10 -- # set +x 00:15:31.205 11:43:01 -- nvmf/common.sh@469 -- # nvmfpid=3703030 00:15:31.205 11:43:01 -- nvmf/common.sh@470 -- # waitforlisten 3703030 00:15:31.205 11:43:01 -- common/autotest_common.sh@829 -- # '[' -z 3703030 ']' 00:15:31.205 11:43:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.205 11:43:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.205 11:43:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.205 11:43:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.205 11:43:01 -- common/autotest_common.sh@10 -- # set +x 00:15:31.205 11:43:01 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:31.205 [2024-12-03 11:43:01.744058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:31.205 [2024-12-03 11:43:01.744115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.205 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.205 [2024-12-03 11:43:01.813987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:31.462 [2024-12-03 11:43:01.886932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:31.462 [2024-12-03 11:43:01.887038] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.462 [2024-12-03 11:43:01.887048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.462 [2024-12-03 11:43:01.887060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.462 [2024-12-03 11:43:01.887157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.462 [2024-12-03 11:43:01.887177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.462 [2024-12-03 11:43:01.887179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.028 11:43:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.028 11:43:02 -- common/autotest_common.sh@862 -- # return 0 00:15:32.028 11:43:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:32.028 11:43:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.028 11:43:02 -- common/autotest_common.sh@10 -- # set +x 00:15:32.028 11:43:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.028 11:43:02 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:32.028 11:43:02 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:32.286 [2024-12-03 11:43:02.800785] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc25860/0xc29d50) succeed. 00:15:32.286 [2024-12-03 11:43:02.809912] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc26db0/0xc6b3f0) succeed. 
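For reference, the target-side setup this trace performs (transport creation just above, subsystem/listener/bdev setup in the entries that follow) can be reproduced by hand with the same rpc.py calls against an already running nvmf_tgt. A minimal sketch, condensed from the commands traced in this run (workspace path, the 192.168.100.8 RDMA listener address, and the -m 10 namespace limit are taken from the log; it assumes nvmf_tgt was started as shown above and the mlx5 RDMA IPs are already assigned):

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport for the NVMe-oF target
  $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # subsystem with up to 10 namespaces, listening on the first mlx5 port
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Malloc0 wrapped by a delay bdev, plus a null bdev, both exported as namespaces
  $rpc_py bdev_malloc_create 32 512 -b Malloc0
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 1000 512
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1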
00:15:32.544 11:43:02 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:32.544 11:43:03 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:32.803 [2024-12-03 11:43:03.272294] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:32.803 11:43:03 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:33.060 11:43:03 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:33.060 Malloc0 00:15:33.060 11:43:03 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:33.318 Delay0 00:15:33.318 11:43:03 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.576 11:43:04 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:33.834 NULL1 00:15:33.835 11:43:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:33.835 11:43:04 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3703495 00:15:33.835 11:43:04 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:33.835 11:43:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:33.835 11:43:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.094 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.028 Read completed with error (sct=0, sc=11) 00:15:35.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.028 11:43:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:35.285 11:43:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:35.285 11:43:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:35.542 true 00:15:35.543 11:43:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:35.543 11:43:05 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.476 11:43:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.476 11:43:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:36.476 11:43:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:36.734 true 00:15:36.734 11:43:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:36.734 11:43:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.670 11:43:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.670 11:43:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:37.670 11:43:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:37.929 true 00:15:37.929 11:43:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:37.929 11:43:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.871 11:43:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:38.871 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:15:38.871 11:43:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:38.871 11:43:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:39.131 true 00:15:39.131 11:43:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:39.131 11:43:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.070 11:43:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.070 11:43:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:40.070 11:43:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:40.330 true 00:15:40.330 11:43:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:40.330 11:43:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.270 11:43:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.270 11:43:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:41.270 11:43:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:41.530 true 00:15:41.530 11:43:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:41.530 11:43:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.790 11:43:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.790 11:43:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:41.790 11:43:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:42.049 true 00:15:42.049 11:43:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:42.049 11:43:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
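The remove/add/resize cycle that repeats through the rest of this trace follows a simple pattern while spdk_nvme_perf (PID 3703495, held in PERF_PID above) keeps issuing reads. A minimal sketch of that cycle, with the loop structure inferred from the xtrace rather than copied verbatim from ns_hotplug_stress.sh:

  # names (cnode1, Delay0, NULL1) and the 1000-and-counting null size come from this run
  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do            # keep cycling while perf I/O is in flight
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 under load
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"                      # grow NULL1: 1001, 1002, ... as logged
  done

The expected "Read completed with error (sct=0, sc=11)" messages in the log are the namespace-removal errors the initiator sees while a namespace is detached mid-I/O, which is exactly what this stress test exercises.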
00:15:43.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.427 11:43:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:43.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.427 11:43:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:43.427 11:43:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:43.427 true 00:15:43.686 11:43:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:43.686 11:43:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.351 11:43:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.651 11:43:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:44.651 11:43:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:44.651 true 00:15:44.911 11:43:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:44.911 11:43:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.477 11:43:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.736 11:43:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:45.736 
11:43:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:45.995 true 00:15:45.995 11:43:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:45.995 11:43:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.933 11:43:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.933 11:43:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:46.933 11:43:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:47.192 true 00:15:47.192 11:43:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:47.192 11:43:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.129 11:43:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.129 11:43:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:48.129 11:43:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:48.388 true 00:15:48.388 11:43:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:48.388 11:43:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.325 11:43:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.325 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.325 11:43:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:49.325 11:43:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:49.584 true 00:15:49.584 11:43:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:49.584 11:43:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.521 11:43:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.521 11:43:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:50.521 11:43:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:50.780 true 00:15:50.780 11:43:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:50.780 11:43:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.717 11:43:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.717 11:43:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:51.717 11:43:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:51.976 true 00:15:51.976 11:43:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:51.976 11:43:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.914 11:43:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.914 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:15:52.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.914 11:43:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:52.914 11:43:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:53.173 true 00:15:53.173 11:43:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:53.173 11:43:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.111 11:43:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.111 11:43:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:54.111 11:43:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:54.371 true 00:15:54.371 11:43:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:54.371 11:43:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.308 11:43:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:55.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.308 11:43:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:55.308 11:43:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:55.568 true 00:15:55.568 11:43:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:55.568 11:43:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.507 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:15:56.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.507 11:43:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:56.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.766 11:43:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:56.766 11:43:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:56.766 true 00:15:56.766 11:43:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:56.766 11:43:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.703 11:43:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.962 11:43:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:57.962 11:43:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:57.962 true 00:15:57.962 11:43:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:57.962 11:43:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.900 11:43:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:58.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.159 11:43:29 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1021 00:15:59.159 11:43:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:59.159 true 00:15:59.159 11:43:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:15:59.159 11:43:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.096 11:43:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.355 11:43:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:00.355 11:43:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:00.355 true 00:16:00.355 11:43:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:16:00.355 11:43:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.293 11:43:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.552 11:43:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:01.552 11:43:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:01.552 true 00:16:01.552 11:43:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:16:01.552 11:43:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.490 11:43:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.749 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:16:02.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.750 11:43:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:02.750 11:43:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:03.009 true 00:16:03.009 11:43:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:16:03.009 11:43:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.947 11:43:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:03.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.948 11:43:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:03.948 11:43:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:04.207 true 00:16:04.207 11:43:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:16:04.207 11:43:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.143 11:43:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:05.143 11:43:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:05.143 11:43:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:05.402 true 00:16:05.402 11:43:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:16:05.402 11:43:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.402 11:43:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:05.662 11:43:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:05.662 11:43:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:05.922 true 00:16:05.922 11:43:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:16:05.922 11:43:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.922 11:43:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
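The iterations above all follow one pattern from ns_hotplug_stress.sh: while the background I/O generator (pid 3703495 in this run) is still alive, namespace 1 is hot-removed and re-added on the Delay0 bdev and the NULL1 null bdev is grown by one unit (null_size 1019, 1020, ... in the trace). A minimal bash sketch of that loop, reconstructed from the traced rpc.py calls; the while-loop form and the perf_pid variable are assumptions, not the script verbatim, and null_size is assumed to have been initialized earlier:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  perf_pid=3703495                                                   # background I/O generator pid seen in the trace
  while kill -0 "$perf_pid"; do                                      # keep cycling while the I/O generator is alive
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1 under live I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach it backed by the Delay0 bdev
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                       # resize the null bdev each pass
  done

The repeated "Read completed with error (sct=0, sc=11)" suppression notices are consistent with reads landing on a namespace that is being cycled, which is exactly what this stress test is meant to provoke.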
00:16:06.181 11:43:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:06.181 11:43:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:06.440 true 00:16:06.440 11:43:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:16:06.440 11:43:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.700 11:43:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:06.700 11:43:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:06.700 11:43:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:06.960 Initializing NVMe Controllers 00:16:06.960 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:06.960 Controller IO queue size 128, less than required. 00:16:06.960 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:06.960 Controller IO queue size 128, less than required. 00:16:06.960 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:06.960 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:06.960 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:06.960 Initialization complete. Launching workers. 00:16:06.960 ======================================================== 00:16:06.960 Latency(us) 00:16:06.960 Device Information : IOPS MiB/s Average min max 00:16:06.960 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5953.30 2.91 19279.99 868.22 1132348.16 00:16:06.960 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36126.57 17.64 3542.95 1530.08 281980.96 00:16:06.960 ======================================================== 00:16:06.960 Total : 42079.87 20.55 5769.37 868.22 1132348.16 00:16:06.960 00:16:06.960 true 00:16:06.960 11:43:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3703495 00:16:06.960 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3703495) - No such process 00:16:06.960 11:43:37 -- target/ns_hotplug_stress.sh@53 -- # wait 3703495 00:16:06.960 11:43:37 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.219 11:43:37 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.479 11:43:37 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:07.479 11:43:37 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:07.479 11:43:37 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:07.479 11:43:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:07.479 11:43:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:07.479 null0 00:16:07.479 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:07.479 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:07.479 11:43:38 -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:07.739 null1 00:16:07.739 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:07.739 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:07.739 11:43:38 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:07.998 null2 00:16:07.998 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:07.998 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:07.998 11:43:38 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:07.998 null3 00:16:07.998 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:07.998 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:07.998 11:43:38 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:08.257 null4 00:16:08.257 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:08.257 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:08.257 11:43:38 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:08.517 null5 00:16:08.517 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:08.517 11:43:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:08.517 11:43:38 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:08.517 null6 00:16:08.517 11:43:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:08.517 11:43:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:08.517 11:43:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:08.777 null7 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
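At 00:16:07.479 the single-namespace phase is over and the trace switches to the multi-worker phase: eight null bdevs (null0 through null7, 100 MB each with a 4096-byte block size) are created, then one add_remove worker per bdev is launched in the background and its pid recorded for a final wait. A hedged sketch of that phase; nthreads, pids and the add_remove name come from the trace, while the loop layout here is an approximation:

  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      $rpc bdev_null_create "null$i" 100 4096        # 100 MB null bdev, 4096-byte blocks
  done
  for (( i = 0; i < nthreads; i++ )); do
      add_remove $((i + 1)) "null$i" &               # nsid i+1 is churned against bdev null$i
      pids+=($!)                                     # remember the worker pid
  done
  wait "${pids[@]}"                                  # corresponds to the traced '@66 -- # wait 3709613 3709615 ...'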
00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
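Each backgrounded worker is an invocation of add_remove, whose interleaved xtrace (the @14 through @18 lines above) shows ten add/remove cycles against the worker's own namespace ID. A plausible reconstruction, assuming the helper does nothing beyond what is traced:

  add_remove() {
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # attach $bdev as namespace $nsid
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # detach it again
      done
  }

Because eight of these run concurrently against the same subsystem, the add and remove calls for different nsids interleave freely in the trace that follows.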
00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:08.777 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@66 -- # wait 3709613 3709615 3709616 3709618 3709621 3709622 3709624 3709626 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.778 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:09.037 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:09.037 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:09.037 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:09.037 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:09.037 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.037 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:09.037 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:09.037 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:09.296 11:43:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:09.555 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:09.816 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:09.816 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:09.816 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.816 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:09.816 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:09.816 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:09.816 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:09.816 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.125 11:43:40 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:10.125 11:43:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.392 11:43:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.650 11:43:41 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:10.650 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:10.909 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:10.909 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:10.909 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:10.909 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.909 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:10.909 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:10.909 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:10.909 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:11.167 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.168 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.427 11:43:41 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.427 11:43:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.427 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:11.686 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:11.686 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:11.686 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:11.686 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.686 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:11.686 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:11.686 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:11.686 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:11.945 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:11.946 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
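The remaining passes repeat the same add/remove churn until every worker has finished its ten iterations. Not something this run does, but if a failure in this phase needed triage, the same rpc.py client can dump the subsystem's current namespace list between passes; a hypothetical spot-check (nvmf_get_subsystems is a standard SPDK RPC, the jq filter and the exact output shape are assumptions):

  $rpc nvmf_get_subsystems \
    | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'   # lists whichever nsids are attached right now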
00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.205 11:43:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:12.464 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:12.464 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:12.464 11:43:42 -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:12.464 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:12.464 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.464 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:12.464 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:12.464 11:43:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:12.723 11:43:43 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:12.723 11:43:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:12.723 11:43:43 -- nvmf/common.sh@116 -- # sync 00:16:12.723 11:43:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:12.723 11:43:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:12.723 11:43:43 -- nvmf/common.sh@119 -- # set +e 00:16:12.723 11:43:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:12.723 11:43:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:12.723 rmmod nvme_rdma 00:16:12.723 rmmod nvme_fabrics 00:16:12.723 11:43:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:12.723 11:43:43 -- nvmf/common.sh@123 -- # set -e 00:16:12.723 11:43:43 -- nvmf/common.sh@124 -- # return 0 00:16:12.723 11:43:43 -- nvmf/common.sh@477 -- # '[' -n 3703030 ']' 00:16:12.723 11:43:43 -- nvmf/common.sh@478 -- # killprocess 3703030 00:16:12.723 11:43:43 -- common/autotest_common.sh@936 -- # '[' -z 3703030 ']' 00:16:12.723 11:43:43 -- common/autotest_common.sh@940 -- # kill -0 3703030 00:16:12.723 11:43:43 -- common/autotest_common.sh@941 -- # uname 00:16:12.723 11:43:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.723 11:43:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3703030 
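The teardown traced here is nvmftestfini from the test's nvmf/common.sh, with killprocess coming from autotest_common.sh: the host-side nvme-rdma and nvme-fabrics modules are unloaded and the nvmf target process (pid 3703030, an SPDK reactor) is then killed and reaped. A simplified, hedged sketch; the function names match the trace, but the bodies below are trimmed to the calls that are visible and omit the retry and safety checks:

  nvmftestfini() {
      nvmfcleanup                         # sync, then unload the fabrics modules
      killprocess "$nvmfpid"              # $nvmfpid is 3703030 in this run
  }
  nvmfcleanup() {
      sync
      modprobe -v -r nvme-rdma            # trace: 'rmmod nvme_rdma'
      modprobe -v -r nvme-fabrics         # trace: 'rmmod nvme_fabrics'
  }
  killprocess() {
      echo "killing process with pid $1"
      kill "$1"
      wait "$1"
  }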
00:16:12.723 11:43:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:12.723 11:43:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:12.723 11:43:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3703030' 00:16:12.724 killing process with pid 3703030 00:16:12.724 11:43:43 -- common/autotest_common.sh@955 -- # kill 3703030 00:16:12.724 11:43:43 -- common/autotest_common.sh@960 -- # wait 3703030 00:16:12.982 11:43:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.982 11:43:43 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:12.982 00:16:12.982 real 0m48.846s 00:16:12.982 user 3m19.898s 00:16:12.982 sys 0m13.792s 00:16:12.982 11:43:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:12.982 11:43:43 -- common/autotest_common.sh@10 -- # set +x 00:16:12.982 ************************************ 00:16:12.982 END TEST nvmf_ns_hotplug_stress 00:16:12.982 ************************************ 00:16:13.242 11:43:43 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:13.242 11:43:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.242 11:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.242 11:43:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 ************************************ 00:16:13.242 START TEST nvmf_connect_stress 00:16:13.242 ************************************ 00:16:13.242 11:43:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:13.242 * Looking for test storage... 00:16:13.242 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:13.242 11:43:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:13.242 11:43:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:13.242 11:43:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:13.242 11:43:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:13.242 11:43:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:13.242 11:43:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:13.242 11:43:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:13.242 11:43:43 -- scripts/common.sh@335 -- # IFS=.-: 00:16:13.242 11:43:43 -- scripts/common.sh@335 -- # read -ra ver1 00:16:13.242 11:43:43 -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.242 11:43:43 -- scripts/common.sh@336 -- # read -ra ver2 00:16:13.242 11:43:43 -- scripts/common.sh@337 -- # local 'op=<' 00:16:13.242 11:43:43 -- scripts/common.sh@339 -- # ver1_l=2 00:16:13.242 11:43:43 -- scripts/common.sh@340 -- # ver2_l=1 00:16:13.242 11:43:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:13.242 11:43:43 -- scripts/common.sh@343 -- # case "$op" in 00:16:13.242 11:43:43 -- scripts/common.sh@344 -- # : 1 00:16:13.242 11:43:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:13.242 11:43:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.242 11:43:43 -- scripts/common.sh@364 -- # decimal 1 00:16:13.242 11:43:43 -- scripts/common.sh@352 -- # local d=1 00:16:13.242 11:43:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.242 11:43:43 -- scripts/common.sh@354 -- # echo 1 00:16:13.242 11:43:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:13.242 11:43:43 -- scripts/common.sh@365 -- # decimal 2 00:16:13.242 11:43:43 -- scripts/common.sh@352 -- # local d=2 00:16:13.242 11:43:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.242 11:43:43 -- scripts/common.sh@354 -- # echo 2 00:16:13.242 11:43:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:13.242 11:43:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:13.242 11:43:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:13.242 11:43:43 -- scripts/common.sh@367 -- # return 0 00:16:13.242 11:43:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.242 11:43:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:13.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.242 --rc genhtml_branch_coverage=1 00:16:13.242 --rc genhtml_function_coverage=1 00:16:13.242 --rc genhtml_legend=1 00:16:13.242 --rc geninfo_all_blocks=1 00:16:13.242 --rc geninfo_unexecuted_blocks=1 00:16:13.242 00:16:13.242 ' 00:16:13.242 11:43:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:13.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.242 --rc genhtml_branch_coverage=1 00:16:13.242 --rc genhtml_function_coverage=1 00:16:13.242 --rc genhtml_legend=1 00:16:13.242 --rc geninfo_all_blocks=1 00:16:13.242 --rc geninfo_unexecuted_blocks=1 00:16:13.242 00:16:13.242 ' 00:16:13.242 11:43:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:13.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.242 --rc genhtml_branch_coverage=1 00:16:13.242 --rc genhtml_function_coverage=1 00:16:13.242 --rc genhtml_legend=1 00:16:13.242 --rc geninfo_all_blocks=1 00:16:13.242 --rc geninfo_unexecuted_blocks=1 00:16:13.242 00:16:13.242 ' 00:16:13.242 11:43:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:13.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.242 --rc genhtml_branch_coverage=1 00:16:13.243 --rc genhtml_function_coverage=1 00:16:13.243 --rc genhtml_legend=1 00:16:13.243 --rc geninfo_all_blocks=1 00:16:13.243 --rc geninfo_unexecuted_blocks=1 00:16:13.243 00:16:13.243 ' 00:16:13.243 11:43:43 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.243 11:43:43 -- nvmf/common.sh@7 -- # uname -s 00:16:13.243 11:43:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.243 11:43:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.243 11:43:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.243 11:43:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.243 11:43:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.243 11:43:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.243 11:43:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.243 11:43:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.243 11:43:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.243 11:43:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.243 11:43:43 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:13.243 11:43:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:13.243 11:43:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.243 11:43:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.243 11:43:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.243 11:43:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:13.243 11:43:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.243 11:43:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.243 11:43:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.243 11:43:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.243 11:43:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.243 11:43:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.243 11:43:43 -- paths/export.sh@5 -- # export PATH 00:16:13.243 11:43:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.243 11:43:43 -- nvmf/common.sh@46 -- # : 0 00:16:13.243 11:43:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:13.243 11:43:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:13.243 11:43:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:13.243 11:43:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.243 11:43:43 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.243 11:43:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:13.243 11:43:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:13.243 11:43:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:13.243 11:43:43 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:13.243 11:43:43 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:13.243 11:43:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.243 11:43:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:13.243 11:43:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:13.243 11:43:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:13.243 11:43:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.243 11:43:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.243 11:43:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.243 11:43:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:13.243 11:43:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:13.243 11:43:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:13.243 11:43:43 -- common/autotest_common.sh@10 -- # set +x 00:16:19.824 11:43:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:19.824 11:43:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:19.824 11:43:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:19.824 11:43:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:19.824 11:43:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:19.824 11:43:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:19.824 11:43:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:19.824 11:43:50 -- nvmf/common.sh@294 -- # net_devs=() 00:16:19.824 11:43:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:19.824 11:43:50 -- nvmf/common.sh@295 -- # e810=() 00:16:19.824 11:43:50 -- nvmf/common.sh@295 -- # local -ga e810 00:16:19.824 11:43:50 -- nvmf/common.sh@296 -- # x722=() 00:16:19.824 11:43:50 -- nvmf/common.sh@296 -- # local -ga x722 00:16:19.824 11:43:50 -- nvmf/common.sh@297 -- # mlx=() 00:16:19.824 11:43:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:19.824 11:43:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.824 11:43:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:19.824 11:43:50 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:19.824 11:43:50 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:19.824 11:43:50 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
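The 'Found 0000:d9:00.x' / 'Found net devices under ...' pairs printed below come from nvmf/common.sh resolving each Mellanox PCI function to its kernel netdev through sysfs; condensed from the trace (variable names as in the log, the empty-list handling is assumed):

    for pci in "${pci_devs[@]}"; do                        # 0000:d9:00.0 and 0000:d9:00.1 here
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdevs on that function
        (( ${#pci_net_devs[@]} == 0 )) && continue         # the '(( 1 == 0 ))' check in the trace
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep e.g. mlx_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done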
00:16:19.824 11:43:50 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:19.824 11:43:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:19.824 11:43:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:19.824 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:19.824 11:43:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:19.824 11:43:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:19.824 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:19.824 11:43:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:19.824 11:43:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:19.824 11:43:50 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.824 11:43:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:19.824 11:43:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.824 11:43:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:19.824 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:19.824 11:43:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.824 11:43:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.824 11:43:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:19.824 11:43:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.824 11:43:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:19.824 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:19.824 11:43:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.824 11:43:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:19.824 11:43:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:19.824 11:43:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:19.824 11:43:50 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:19.824 11:43:50 -- nvmf/common.sh@57 -- # uname 00:16:19.824 11:43:50 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:19.824 11:43:50 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:19.824 11:43:50 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:19.824 11:43:50 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:19.824 
11:43:50 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:19.824 11:43:50 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:19.824 11:43:50 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:19.824 11:43:50 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:19.824 11:43:50 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:19.824 11:43:50 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:19.824 11:43:50 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:19.824 11:43:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:19.824 11:43:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:19.824 11:43:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:19.824 11:43:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:19.824 11:43:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:19.824 11:43:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:19.824 11:43:50 -- nvmf/common.sh@104 -- # continue 2 00:16:19.824 11:43:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.824 11:43:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:19.824 11:43:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:19.824 11:43:50 -- nvmf/common.sh@104 -- # continue 2 00:16:19.824 11:43:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:19.824 11:43:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:19.824 11:43:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:19.825 11:43:50 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:19.825 11:43:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:19.825 11:43:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:19.825 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:19.825 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:19.825 altname enp217s0f0np0 00:16:19.825 altname ens818f0np0 00:16:19.825 inet 192.168.100.8/24 scope global mlx_0_0 00:16:19.825 valid_lft forever preferred_lft forever 00:16:19.825 11:43:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:19.825 11:43:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:19.825 11:43:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:19.825 11:43:50 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:19.825 11:43:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:19.825 11:43:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:19.825 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:19.825 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:19.825 altname enp217s0f1np1 
00:16:19.825 altname ens818f1np1 00:16:19.825 inet 192.168.100.9/24 scope global mlx_0_1 00:16:19.825 valid_lft forever preferred_lft forever 00:16:19.825 11:43:50 -- nvmf/common.sh@410 -- # return 0 00:16:19.825 11:43:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:19.825 11:43:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:19.825 11:43:50 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:19.825 11:43:50 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:19.825 11:43:50 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:19.825 11:43:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:19.825 11:43:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:19.825 11:43:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:19.825 11:43:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:19.825 11:43:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:19.825 11:43:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:19.825 11:43:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.825 11:43:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:19.825 11:43:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:19.825 11:43:50 -- nvmf/common.sh@104 -- # continue 2 00:16:19.825 11:43:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:19.825 11:43:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.825 11:43:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:19.825 11:43:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:19.825 11:43:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:19.825 11:43:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:19.825 11:43:50 -- nvmf/common.sh@104 -- # continue 2 00:16:19.825 11:43:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:19.825 11:43:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:19.825 11:43:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:19.825 11:43:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:19.825 11:43:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:19.825 11:43:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:19.825 11:43:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:19.825 11:43:50 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:19.825 192.168.100.9' 00:16:19.825 11:43:50 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:19.825 192.168.100.9' 00:16:19.825 11:43:50 -- nvmf/common.sh@445 -- # head -n 1 00:16:19.825 11:43:50 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:19.825 11:43:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:19.825 192.168.100.9' 00:16:19.825 11:43:50 -- nvmf/common.sh@446 -- # tail -n +2 00:16:19.825 11:43:50 -- nvmf/common.sh@446 -- # head -n 1 00:16:19.825 11:43:50 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:19.825 11:43:50 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:19.825 11:43:50 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:16:19.825 11:43:50 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:19.825 11:43:50 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:19.825 11:43:50 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:19.825 11:43:50 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:19.825 11:43:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:19.825 11:43:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.825 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:16:20.084 11:43:50 -- nvmf/common.sh@469 -- # nvmfpid=3713770 00:16:20.084 11:43:50 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:20.084 11:43:50 -- nvmf/common.sh@470 -- # waitforlisten 3713770 00:16:20.084 11:43:50 -- common/autotest_common.sh@829 -- # '[' -z 3713770 ']' 00:16:20.084 11:43:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.084 11:43:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.084 11:43:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.084 11:43:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.084 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:16:20.084 [2024-12-03 11:43:50.485817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:20.084 [2024-12-03 11:43:50.485882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.084 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.084 [2024-12-03 11:43:50.558772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:20.084 [2024-12-03 11:43:50.627130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:20.084 [2024-12-03 11:43:50.627242] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.084 [2024-12-03 11:43:50.627251] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.084 [2024-12-03 11:43:50.627260] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
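nvmfappstart -m 0xE above launches the target and only returns once its RPC socket answers; reduced to its essentials (the command line and pid are from the log, the backgrounding and pid capture are assumptions about the helper):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                    # 3713770 in this run
    waitforlisten "$nvmfpid"      # returns once /var/tmp/spdk.sock accepts RPCs
    # -m 0xE pins the app to cores 1-3, matching the three reactor start-up notices below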
00:16:20.084 [2024-12-03 11:43:50.627361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.084 [2024-12-03 11:43:50.627445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.084 [2024-12-03 11:43:50.627447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.020 11:43:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.020 11:43:51 -- common/autotest_common.sh@862 -- # return 0 00:16:21.020 11:43:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:21.020 11:43:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.020 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.020 11:43:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.020 11:43:51 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:21.020 11:43:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.020 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.020 [2024-12-03 11:43:51.356587] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cdb860/0x1cdfd50) succeed. 00:16:21.020 [2024-12-03 11:43:51.365611] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cdcdb0/0x1d213f0) succeed. 00:16:21.020 11:43:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.020 11:43:51 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:21.020 11:43:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.020 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.020 11:43:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.020 11:43:51 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:21.020 11:43:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.020 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.020 [2024-12-03 11:43:51.476953] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:21.020 11:43:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.020 11:43:51 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:21.020 11:43:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.020 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.020 NULL1 00:16:21.020 11:43:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.020 11:43:51 -- target/connect_stress.sh@21 -- # PERF_PID=3714058 00:16:21.020 11:43:51 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:21.020 11:43:51 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:21.020 11:43:51 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:21.020 11:43:51 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:21.020 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.020 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.020 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.020 11:43:51 -- 
target/connect_stress.sh@28 -- # cat 00:16:21.020 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.020 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.020 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.020 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.020 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.020 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.020 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.020 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:21.021 11:43:51 -- target/connect_stress.sh@28 -- # cat 00:16:21.021 11:43:51 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:21.021 11:43:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.021 11:43:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.021 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.588 11:43:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.588 11:43:51 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:21.588 11:43:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.588 11:43:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.588 11:43:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.847 11:43:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.847 11:43:52 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:21.847 11:43:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.847 11:43:52 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:21.847 11:43:52 -- common/autotest_common.sh@10 -- # set +x 00:16:22.106 11:43:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.106 11:43:52 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:22.106 11:43:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.106 11:43:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.106 11:43:52 -- common/autotest_common.sh@10 -- # set +x 00:16:22.365 11:43:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.365 11:43:52 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:22.365 11:43:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.365 11:43:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.365 11:43:52 -- common/autotest_common.sh@10 -- # set +x 00:16:22.624 11:43:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.624 11:43:53 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:22.624 11:43:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.624 11:43:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.624 11:43:53 -- common/autotest_common.sh@10 -- # set +x 00:16:23.191 11:43:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.191 11:43:53 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:23.191 11:43:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.191 11:43:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.191 11:43:53 -- common/autotest_common.sh@10 -- # set +x 00:16:23.450 11:43:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.450 11:43:53 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:23.450 11:43:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.450 11:43:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.450 11:43:53 -- common/autotest_common.sh@10 -- # set +x 00:16:23.708 11:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.708 11:43:54 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:23.708 11:43:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.708 11:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.708 11:43:54 -- common/autotest_common.sh@10 -- # set +x 00:16:23.966 11:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.966 11:43:54 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:23.966 11:43:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.966 11:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.966 11:43:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.538 11:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.538 11:43:54 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:24.538 11:43:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.538 11:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.539 11:43:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.807 11:43:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.807 11:43:55 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:24.807 11:43:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.807 11:43:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.807 11:43:55 -- common/autotest_common.sh@10 -- # set +x 00:16:25.065 11:43:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.065 11:43:55 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:25.065 11:43:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.065 11:43:55 -- common/autotest_common.sh@561 -- # xtrace_disable 
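The long run of 'kill -0 3714058' / 'rpc_cmd' pairs on either side of this point is connect_stress.sh keeping the RPC socket busy for as long as the stress client lives; roughly (feeding rpc.txt back through rpc_cmd is an assumption, the pid and script line numbers are from the trace):

    # rpc.txt was filled earlier with 20 batches of RPCs (the 'seq 1 20' / 'cat' traces at @27-28)
    while kill -0 "$PERF_PID"; do          # @34: probe only, no signal is delivered
        rpc_cmd < "$rpcs"                  # @35: replay the batch against /var/tmp/spdk.sock
    done
    wait "$PERF_PID"                       # @38: reap the client once kill -0 starts failing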
00:16:25.065 11:43:55 -- common/autotest_common.sh@10 -- # set +x 00:16:25.324 11:43:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.324 11:43:55 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:25.324 11:43:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.324 11:43:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.324 11:43:55 -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 11:43:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.583 11:43:56 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:25.583 11:43:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.583 11:43:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.583 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:16:26.152 11:43:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.152 11:43:56 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:26.152 11:43:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.152 11:43:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.152 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:16:26.423 11:43:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.423 11:43:56 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:26.423 11:43:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.423 11:43:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.423 11:43:56 -- common/autotest_common.sh@10 -- # set +x 00:16:26.698 11:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.698 11:43:57 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:26.698 11:43:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.698 11:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.698 11:43:57 -- common/autotest_common.sh@10 -- # set +x 00:16:26.956 11:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.956 11:43:57 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:26.956 11:43:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.956 11:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.956 11:43:57 -- common/autotest_common.sh@10 -- # set +x 00:16:27.214 11:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.214 11:43:57 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:27.214 11:43:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:27.214 11:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.214 11:43:57 -- common/autotest_common.sh@10 -- # set +x 00:16:27.473 11:43:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.473 11:43:58 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:27.473 11:43:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:27.473 11:43:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.473 11:43:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.041 11:43:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.041 11:43:58 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:28.041 11:43:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.041 11:43:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.041 11:43:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.312 11:43:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.312 11:43:58 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:28.312 11:43:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.312 11:43:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.312 
11:43:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.571 11:43:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.571 11:43:59 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:28.571 11:43:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.571 11:43:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.571 11:43:59 -- common/autotest_common.sh@10 -- # set +x 00:16:28.830 11:43:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.830 11:43:59 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:28.830 11:43:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.830 11:43:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.830 11:43:59 -- common/autotest_common.sh@10 -- # set +x 00:16:29.397 11:43:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.397 11:43:59 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:29.397 11:43:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.397 11:43:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.397 11:43:59 -- common/autotest_common.sh@10 -- # set +x 00:16:29.656 11:44:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.656 11:44:00 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:29.656 11:44:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.656 11:44:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.656 11:44:00 -- common/autotest_common.sh@10 -- # set +x 00:16:29.914 11:44:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.914 11:44:00 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:29.914 11:44:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.914 11:44:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.914 11:44:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.173 11:44:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.173 11:44:00 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:30.173 11:44:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.173 11:44:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.173 11:44:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.432 11:44:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.432 11:44:01 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:30.432 11:44:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.432 11:44:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.432 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:16:30.999 11:44:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.999 11:44:01 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:30.999 11:44:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.999 11:44:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.999 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.258 11:44:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.258 11:44:01 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:31.258 11:44:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.258 11:44:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.258 11:44:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.258 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:31.516 11:44:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.516 11:44:01 -- target/connect_stress.sh@34 -- # kill -0 3714058 00:16:31.516 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3714058) - No such process 00:16:31.516 11:44:01 -- target/connect_stress.sh@38 -- # wait 3714058 00:16:31.516 11:44:01 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:31.516 11:44:02 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:31.516 11:44:02 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:31.516 11:44:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:31.516 11:44:02 -- nvmf/common.sh@116 -- # sync 00:16:31.516 11:44:02 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:31.516 11:44:02 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:31.516 11:44:02 -- nvmf/common.sh@119 -- # set +e 00:16:31.516 11:44:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:31.516 11:44:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:31.516 rmmod nvme_rdma 00:16:31.516 rmmod nvme_fabrics 00:16:31.516 11:44:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:31.516 11:44:02 -- nvmf/common.sh@123 -- # set -e 00:16:31.516 11:44:02 -- nvmf/common.sh@124 -- # return 0 00:16:31.516 11:44:02 -- nvmf/common.sh@477 -- # '[' -n 3713770 ']' 00:16:31.516 11:44:02 -- nvmf/common.sh@478 -- # killprocess 3713770 00:16:31.516 11:44:02 -- common/autotest_common.sh@936 -- # '[' -z 3713770 ']' 00:16:31.516 11:44:02 -- common/autotest_common.sh@940 -- # kill -0 3713770 00:16:31.516 11:44:02 -- common/autotest_common.sh@941 -- # uname 00:16:31.516 11:44:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:31.516 11:44:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3713770 00:16:31.516 11:44:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:31.516 11:44:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:31.516 11:44:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3713770' 00:16:31.516 killing process with pid 3713770 00:16:31.516 11:44:02 -- common/autotest_common.sh@955 -- # kill 3713770 00:16:31.516 11:44:02 -- common/autotest_common.sh@960 -- # wait 3713770 00:16:32.087 11:44:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:32.087 11:44:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:32.087 00:16:32.087 real 0m18.787s 00:16:32.087 user 0m42.756s 00:16:32.087 sys 0m7.508s 00:16:32.087 11:44:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:32.087 11:44:02 -- common/autotest_common.sh@10 -- # set +x 00:16:32.087 ************************************ 00:16:32.087 END TEST nvmf_connect_stress 00:16:32.087 ************************************ 00:16:32.087 11:44:02 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:32.087 11:44:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:32.087 11:44:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.087 11:44:02 -- common/autotest_common.sh@10 -- # set +x 00:16:32.087 ************************************ 00:16:32.087 START TEST nvmf_fused_ordering 00:16:32.087 ************************************ 00:16:32.087 11:44:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:32.087 * Looking for test storage... 
00:16:32.087 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:32.087 11:44:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:32.087 11:44:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:32.087 11:44:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:32.087 11:44:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:32.087 11:44:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:32.087 11:44:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:32.087 11:44:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:32.087 11:44:02 -- scripts/common.sh@335 -- # IFS=.-: 00:16:32.087 11:44:02 -- scripts/common.sh@335 -- # read -ra ver1 00:16:32.087 11:44:02 -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.087 11:44:02 -- scripts/common.sh@336 -- # read -ra ver2 00:16:32.087 11:44:02 -- scripts/common.sh@337 -- # local 'op=<' 00:16:32.087 11:44:02 -- scripts/common.sh@339 -- # ver1_l=2 00:16:32.087 11:44:02 -- scripts/common.sh@340 -- # ver2_l=1 00:16:32.087 11:44:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:32.087 11:44:02 -- scripts/common.sh@343 -- # case "$op" in 00:16:32.087 11:44:02 -- scripts/common.sh@344 -- # : 1 00:16:32.087 11:44:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:32.087 11:44:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:32.087 11:44:02 -- scripts/common.sh@364 -- # decimal 1 00:16:32.087 11:44:02 -- scripts/common.sh@352 -- # local d=1 00:16:32.087 11:44:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.087 11:44:02 -- scripts/common.sh@354 -- # echo 1 00:16:32.087 11:44:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:32.087 11:44:02 -- scripts/common.sh@365 -- # decimal 2 00:16:32.087 11:44:02 -- scripts/common.sh@352 -- # local d=2 00:16:32.087 11:44:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.087 11:44:02 -- scripts/common.sh@354 -- # echo 2 00:16:32.087 11:44:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:32.087 11:44:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:32.087 11:44:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:32.087 11:44:02 -- scripts/common.sh@367 -- # return 0 00:16:32.087 11:44:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.087 11:44:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:32.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.087 --rc genhtml_branch_coverage=1 00:16:32.087 --rc genhtml_function_coverage=1 00:16:32.087 --rc genhtml_legend=1 00:16:32.087 --rc geninfo_all_blocks=1 00:16:32.087 --rc geninfo_unexecuted_blocks=1 00:16:32.087 00:16:32.087 ' 00:16:32.087 11:44:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:32.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.087 --rc genhtml_branch_coverage=1 00:16:32.087 --rc genhtml_function_coverage=1 00:16:32.087 --rc genhtml_legend=1 00:16:32.087 --rc geninfo_all_blocks=1 00:16:32.087 --rc geninfo_unexecuted_blocks=1 00:16:32.087 00:16:32.087 ' 00:16:32.087 11:44:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:32.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.087 --rc genhtml_branch_coverage=1 00:16:32.087 --rc genhtml_function_coverage=1 00:16:32.087 --rc genhtml_legend=1 00:16:32.087 --rc geninfo_all_blocks=1 00:16:32.087 --rc geninfo_unexecuted_blocks=1 00:16:32.087 00:16:32.087 ' 
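The 'lt 1.15 2' / cmp_versions traces above are autotest_common.sh deciding that the installed lcov predates 2.x before exporting the --rc coverage options; a simplified sketch of just the '<' branch, reassembled from the scattered trace lines (the ':-0' defaulting stands in for the script's decimal helper):

    lt() { cmp_versions "$1" "<" "$2"; }                   # "lt 1.15 2" in the trace
    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l v
        IFS=.- read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}
        IFS=.- read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2, so lcov counts as pre-2.x
        done
        return 1
    }
    lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'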
00:16:32.087 11:44:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:32.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.087 --rc genhtml_branch_coverage=1 00:16:32.087 --rc genhtml_function_coverage=1 00:16:32.087 --rc genhtml_legend=1 00:16:32.087 --rc geninfo_all_blocks=1 00:16:32.087 --rc geninfo_unexecuted_blocks=1 00:16:32.087 00:16:32.087 ' 00:16:32.087 11:44:02 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.087 11:44:02 -- nvmf/common.sh@7 -- # uname -s 00:16:32.087 11:44:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.087 11:44:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.087 11:44:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.087 11:44:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.087 11:44:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.087 11:44:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.087 11:44:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.087 11:44:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.087 11:44:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.087 11:44:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.087 11:44:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:32.087 11:44:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:32.087 11:44:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.087 11:44:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.087 11:44:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.087 11:44:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:32.087 11:44:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.087 11:44:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.087 11:44:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.087 11:44:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.087 11:44:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.088 11:44:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.088 11:44:02 -- paths/export.sh@5 -- # export PATH 00:16:32.088 11:44:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.088 11:44:02 -- nvmf/common.sh@46 -- # : 0 00:16:32.088 11:44:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:32.088 11:44:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:32.088 11:44:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:32.088 11:44:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.088 11:44:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.088 11:44:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:32.088 11:44:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:32.088 11:44:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:32.088 11:44:02 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:32.088 11:44:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:32.088 11:44:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.088 11:44:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:32.088 11:44:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:32.088 11:44:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:32.088 11:44:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.088 11:44:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.088 11:44:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.088 11:44:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:32.088 11:44:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:32.088 11:44:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:32.088 11:44:02 -- common/autotest_common.sh@10 -- # set +x 00:16:38.662 11:44:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:38.662 11:44:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:38.662 11:44:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:38.662 11:44:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:38.662 11:44:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:38.662 11:44:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:38.662 11:44:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:38.662 11:44:09 -- nvmf/common.sh@294 -- # net_devs=() 00:16:38.662 11:44:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:38.662 11:44:09 -- nvmf/common.sh@295 -- # e810=() 00:16:38.662 11:44:09 -- nvmf/common.sh@295 -- # local -ga e810 00:16:38.662 11:44:09 -- nvmf/common.sh@296 -- # x722=() 
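allocate_nic_ips and get_available_rdma_ips, replayed below for this test exactly as they ran above, read each RDMA interface's IPv4 address back with the same awk/cut pipeline; in condensed form (the function body is reassembled from the common.sh@111-112 traces, the head/tail split is the one shown at @445-446):

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1   # "192.168.100.8/24" -> "192.168.100.8"
    }
    RDMA_IP_LIST=$(get_available_rdma_ips)                                 # one address per RDMA-backed netdev
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9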
00:16:38.662 11:44:09 -- nvmf/common.sh@296 -- # local -ga x722 00:16:38.662 11:44:09 -- nvmf/common.sh@297 -- # mlx=() 00:16:38.662 11:44:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:38.662 11:44:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.662 11:44:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:38.662 11:44:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:38.662 11:44:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:38.662 11:44:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:38.662 11:44:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:38.662 11:44:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:38.662 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:38.662 11:44:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:38.662 11:44:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:38.662 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:38.662 11:44:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:38.662 11:44:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:38.662 11:44:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.662 11:44:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:38.662 11:44:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.662 11:44:09 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:38.662 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:38.662 11:44:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.662 11:44:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.662 11:44:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:38.662 11:44:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.662 11:44:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:38.662 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:38.662 11:44:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.662 11:44:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:38.662 11:44:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:38.662 11:44:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:38.662 11:44:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:38.662 11:44:09 -- nvmf/common.sh@57 -- # uname 00:16:38.662 11:44:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:38.662 11:44:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:38.662 11:44:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:38.662 11:44:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:38.662 11:44:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:38.662 11:44:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:38.662 11:44:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:38.662 11:44:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:38.662 11:44:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:38.662 11:44:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:38.662 11:44:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:38.662 11:44:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:38.662 11:44:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:38.662 11:44:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:38.662 11:44:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:38.662 11:44:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:38.662 11:44:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:38.662 11:44:09 -- nvmf/common.sh@104 -- # continue 2 00:16:38.662 11:44:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:38.662 11:44:09 -- nvmf/common.sh@104 -- # continue 2 00:16:38.662 11:44:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:38.662 11:44:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:38.662 11:44:09 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:16:38.662 11:44:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:38.662 11:44:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:38.662 11:44:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:38.662 11:44:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:38.662 11:44:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:38.662 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:38.662 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:38.662 altname enp217s0f0np0 00:16:38.662 altname ens818f0np0 00:16:38.662 inet 192.168.100.8/24 scope global mlx_0_0 00:16:38.662 valid_lft forever preferred_lft forever 00:16:38.662 11:44:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:38.662 11:44:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:38.662 11:44:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:38.662 11:44:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:38.662 11:44:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:38.662 11:44:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:38.662 11:44:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:38.662 11:44:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:38.662 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:38.662 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:38.662 altname enp217s0f1np1 00:16:38.662 altname ens818f1np1 00:16:38.662 inet 192.168.100.9/24 scope global mlx_0_1 00:16:38.662 valid_lft forever preferred_lft forever 00:16:38.662 11:44:09 -- nvmf/common.sh@410 -- # return 0 00:16:38.662 11:44:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:38.662 11:44:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:38.662 11:44:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:38.662 11:44:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:38.662 11:44:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:38.662 11:44:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:38.662 11:44:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:38.662 11:44:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:38.662 11:44:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:38.662 11:44:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:38.662 11:44:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:38.662 11:44:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:38.662 11:44:09 -- nvmf/common.sh@104 -- # continue 2 00:16:38.663 11:44:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:38.663 11:44:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:38.663 11:44:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:38.663 11:44:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:38.663 11:44:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:38.663 11:44:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:38.663 11:44:09 -- nvmf/common.sh@104 -- # continue 2 00:16:38.663 11:44:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:38.663 11:44:09 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:38.663 11:44:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:38.663 11:44:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:38.663 11:44:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:38.663 11:44:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:38.663 11:44:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:38.663 11:44:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:38.663 11:44:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:38.663 11:44:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:38.663 11:44:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:38.663 11:44:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:38.663 11:44:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:38.663 192.168.100.9' 00:16:38.663 11:44:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:38.663 192.168.100.9' 00:16:38.663 11:44:09 -- nvmf/common.sh@445 -- # head -n 1 00:16:38.663 11:44:09 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:38.663 11:44:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:38.663 192.168.100.9' 00:16:38.663 11:44:09 -- nvmf/common.sh@446 -- # tail -n +2 00:16:38.663 11:44:09 -- nvmf/common.sh@446 -- # head -n 1 00:16:38.663 11:44:09 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:38.663 11:44:09 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:38.663 11:44:09 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:38.663 11:44:09 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:38.663 11:44:09 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:38.663 11:44:09 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:38.663 11:44:09 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:38.663 11:44:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:38.663 11:44:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.663 11:44:09 -- common/autotest_common.sh@10 -- # set +x 00:16:38.663 11:44:09 -- nvmf/common.sh@469 -- # nvmfpid=3719144 00:16:38.663 11:44:09 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:38.663 11:44:09 -- nvmf/common.sh@470 -- # waitforlisten 3719144 00:16:38.663 11:44:09 -- common/autotest_common.sh@829 -- # '[' -z 3719144 ']' 00:16:38.663 11:44:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.663 11:44:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.663 11:44:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.663 11:44:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.663 11:44:09 -- common/autotest_common.sh@10 -- # set +x 00:16:38.922 [2024-12-03 11:44:09.306855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
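Editor's note: the target IPs used for the rest of this run (192.168.100.8 and 192.168.100.9) are derived with the ip/awk/cut pipeline traced just above. A compact restatement of that derivation follows; the helper name rdma_if_ip is invented for the sketch, and the interface names are the ones reported in this log.

# Sketch of the get_ip_address pattern traced above: first IPv4 address
# on an RDMA netdev, with the prefix length stripped off.
rdma_if_ip() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(rdma_if_ip mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(rdma_if_ip mlx_0_1)   # 192.168.100.9 in this run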
00:16:38.922 [2024-12-03 11:44:09.306903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.922 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.922 [2024-12-03 11:44:09.376077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.922 [2024-12-03 11:44:09.443381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:38.922 [2024-12-03 11:44:09.443493] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.922 [2024-12-03 11:44:09.443503] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.922 [2024-12-03 11:44:09.443511] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:38.922 [2024-12-03 11:44:09.443532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.858 11:44:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.858 11:44:10 -- common/autotest_common.sh@862 -- # return 0 00:16:39.858 11:44:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:39.858 11:44:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:39.858 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:16:39.858 11:44:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.858 11:44:10 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:39.858 11:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.858 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:16:39.858 [2024-12-03 11:44:10.199340] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x166f230/0x1673720) succeed. 00:16:39.858 [2024-12-03 11:44:10.208474] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1670730/0x16b4dc0) succeed. 
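Editor's note: the app_setup_trace notices above point at two ways to inspect the 0xFFFF tracepoint mask the target was started with. A short sketch, assuming the spdk_trace tool sits in the same build tree as the nvmf_tgt binary used here:

# Live snapshot of nvmf tracepoints for shm id 0, as suggested in the notice above
# (the build/bin location of spdk_trace is an assumption).
./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt

# Or keep the raw trace file for offline analysis once the target exits.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0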
00:16:39.858 11:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.858 11:44:10 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:39.858 11:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.858 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:16:39.858 11:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.858 11:44:10 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:39.858 11:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.858 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:16:39.858 [2024-12-03 11:44:10.273935] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:39.858 11:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.858 11:44:10 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:39.858 11:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.858 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:16:39.858 NULL1 00:16:39.858 11:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.858 11:44:10 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:39.858 11:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.858 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:16:39.858 11:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.858 11:44:10 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:39.858 11:44:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.858 11:44:10 -- common/autotest_common.sh@10 -- # set +x 00:16:39.859 11:44:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.859 11:44:10 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:39.859 [2024-12-03 11:44:10.329853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
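Editor's note: strung together, the rpc_cmd calls traced on the lines above amount to the provisioning sequence below. rpc_cmd is assumed here to resolve to scripts/rpc.py talking to the default /var/tmp/spdk.sock; the method names, arguments, and the fused_ordering connection string are taken verbatim from this log.

# Hedged sketch of the target provisioning the harness just performed.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, 512-byte blocks ("size: 1GB" below)
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Then the fused ordering exerciser is pointed at that subsystem, exactly as above.
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'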
00:16:39.859 [2024-12-03 11:44:10.329889] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719428 ] 00:16:39.859 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.118 Attached to nqn.2016-06.io.spdk:cnode1 00:16:40.118 Namespace ID: 1 size: 1GB 00:16:40.118 fused_ordering(0) 00:16:40.118 fused_ordering(1) 00:16:40.118 fused_ordering(2) 00:16:40.118 fused_ordering(3) 00:16:40.118 fused_ordering(4) 00:16:40.118 fused_ordering(5) 00:16:40.118 fused_ordering(6) 00:16:40.118 fused_ordering(7) 00:16:40.118 fused_ordering(8) 00:16:40.118 fused_ordering(9) 00:16:40.118 fused_ordering(10) 00:16:40.118 fused_ordering(11) 00:16:40.118 fused_ordering(12) 00:16:40.118 fused_ordering(13) 00:16:40.118 fused_ordering(14) 00:16:40.118 fused_ordering(15) 00:16:40.118 fused_ordering(16) 00:16:40.119 fused_ordering(17) 00:16:40.119 fused_ordering(18) 00:16:40.119 fused_ordering(19) 00:16:40.119 fused_ordering(20) 00:16:40.119 fused_ordering(21) 00:16:40.119 fused_ordering(22) 00:16:40.119 fused_ordering(23) 00:16:40.119 fused_ordering(24) 00:16:40.119 fused_ordering(25) 00:16:40.119 fused_ordering(26) 00:16:40.119 fused_ordering(27) 00:16:40.119 fused_ordering(28) 00:16:40.119 fused_ordering(29) 00:16:40.119 fused_ordering(30) 00:16:40.119 fused_ordering(31) 00:16:40.119 fused_ordering(32) 00:16:40.119 fused_ordering(33) 00:16:40.119 fused_ordering(34) 00:16:40.119 fused_ordering(35) 00:16:40.119 fused_ordering(36) 00:16:40.119 fused_ordering(37) 00:16:40.119 fused_ordering(38) 00:16:40.119 fused_ordering(39) 00:16:40.119 fused_ordering(40) 00:16:40.119 fused_ordering(41) 00:16:40.119 fused_ordering(42) 00:16:40.119 fused_ordering(43) 00:16:40.119 fused_ordering(44) 00:16:40.119 fused_ordering(45) 00:16:40.119 fused_ordering(46) 00:16:40.119 fused_ordering(47) 00:16:40.119 fused_ordering(48) 00:16:40.119 fused_ordering(49) 00:16:40.119 fused_ordering(50) 00:16:40.119 fused_ordering(51) 00:16:40.119 fused_ordering(52) 00:16:40.119 fused_ordering(53) 00:16:40.119 fused_ordering(54) 00:16:40.119 fused_ordering(55) 00:16:40.119 fused_ordering(56) 00:16:40.119 fused_ordering(57) 00:16:40.119 fused_ordering(58) 00:16:40.119 fused_ordering(59) 00:16:40.119 fused_ordering(60) 00:16:40.119 fused_ordering(61) 00:16:40.119 fused_ordering(62) 00:16:40.119 fused_ordering(63) 00:16:40.119 fused_ordering(64) 00:16:40.119 fused_ordering(65) 00:16:40.119 fused_ordering(66) 00:16:40.119 fused_ordering(67) 00:16:40.119 fused_ordering(68) 00:16:40.119 fused_ordering(69) 00:16:40.119 fused_ordering(70) 00:16:40.119 fused_ordering(71) 00:16:40.119 fused_ordering(72) 00:16:40.119 fused_ordering(73) 00:16:40.119 fused_ordering(74) 00:16:40.119 fused_ordering(75) 00:16:40.119 fused_ordering(76) 00:16:40.119 fused_ordering(77) 00:16:40.119 fused_ordering(78) 00:16:40.119 fused_ordering(79) 00:16:40.119 fused_ordering(80) 00:16:40.119 fused_ordering(81) 00:16:40.119 fused_ordering(82) 00:16:40.119 fused_ordering(83) 00:16:40.119 fused_ordering(84) 00:16:40.119 fused_ordering(85) 00:16:40.119 fused_ordering(86) 00:16:40.119 fused_ordering(87) 00:16:40.119 fused_ordering(88) 00:16:40.119 fused_ordering(89) 00:16:40.119 fused_ordering(90) 00:16:40.119 fused_ordering(91) 00:16:40.119 fused_ordering(92) 00:16:40.119 fused_ordering(93) 00:16:40.119 fused_ordering(94) 00:16:40.119 fused_ordering(95) 00:16:40.119 fused_ordering(96) 00:16:40.119 
fused_ordering(97) 00:16:40.119 fused_ordering(98) 00:16:40.119 fused_ordering(99) 00:16:40.119 fused_ordering(100) 00:16:40.119 fused_ordering(101) 00:16:40.119 fused_ordering(102) 00:16:40.119 fused_ordering(103) 00:16:40.119 fused_ordering(104) 00:16:40.119 fused_ordering(105) 00:16:40.119 fused_ordering(106) 00:16:40.119 fused_ordering(107) 00:16:40.119 fused_ordering(108) 00:16:40.119 fused_ordering(109) 00:16:40.119 fused_ordering(110) 00:16:40.119 fused_ordering(111) 00:16:40.119 fused_ordering(112) 00:16:40.119 fused_ordering(113) 00:16:40.119 fused_ordering(114) 00:16:40.119 fused_ordering(115) 00:16:40.119 fused_ordering(116) 00:16:40.119 fused_ordering(117) 00:16:40.119 fused_ordering(118) 00:16:40.119 fused_ordering(119) 00:16:40.119 fused_ordering(120) 00:16:40.119 fused_ordering(121) 00:16:40.119 fused_ordering(122) 00:16:40.119 fused_ordering(123) 00:16:40.119 fused_ordering(124) 00:16:40.119 fused_ordering(125) 00:16:40.119 fused_ordering(126) 00:16:40.119 fused_ordering(127) 00:16:40.119 fused_ordering(128) 00:16:40.119 fused_ordering(129) 00:16:40.119 fused_ordering(130) 00:16:40.119 fused_ordering(131) 00:16:40.119 fused_ordering(132) 00:16:40.119 fused_ordering(133) 00:16:40.119 fused_ordering(134) 00:16:40.119 fused_ordering(135) 00:16:40.119 fused_ordering(136) 00:16:40.119 fused_ordering(137) 00:16:40.119 fused_ordering(138) 00:16:40.119 fused_ordering(139) 00:16:40.119 fused_ordering(140) 00:16:40.119 fused_ordering(141) 00:16:40.119 fused_ordering(142) 00:16:40.119 fused_ordering(143) 00:16:40.119 fused_ordering(144) 00:16:40.119 fused_ordering(145) 00:16:40.119 fused_ordering(146) 00:16:40.119 fused_ordering(147) 00:16:40.119 fused_ordering(148) 00:16:40.119 fused_ordering(149) 00:16:40.119 fused_ordering(150) 00:16:40.119 fused_ordering(151) 00:16:40.119 fused_ordering(152) 00:16:40.119 fused_ordering(153) 00:16:40.119 fused_ordering(154) 00:16:40.119 fused_ordering(155) 00:16:40.119 fused_ordering(156) 00:16:40.119 fused_ordering(157) 00:16:40.119 fused_ordering(158) 00:16:40.119 fused_ordering(159) 00:16:40.119 fused_ordering(160) 00:16:40.119 fused_ordering(161) 00:16:40.119 fused_ordering(162) 00:16:40.119 fused_ordering(163) 00:16:40.119 fused_ordering(164) 00:16:40.119 fused_ordering(165) 00:16:40.119 fused_ordering(166) 00:16:40.119 fused_ordering(167) 00:16:40.119 fused_ordering(168) 00:16:40.119 fused_ordering(169) 00:16:40.119 fused_ordering(170) 00:16:40.119 fused_ordering(171) 00:16:40.119 fused_ordering(172) 00:16:40.119 fused_ordering(173) 00:16:40.119 fused_ordering(174) 00:16:40.119 fused_ordering(175) 00:16:40.119 fused_ordering(176) 00:16:40.119 fused_ordering(177) 00:16:40.119 fused_ordering(178) 00:16:40.119 fused_ordering(179) 00:16:40.119 fused_ordering(180) 00:16:40.119 fused_ordering(181) 00:16:40.119 fused_ordering(182) 00:16:40.119 fused_ordering(183) 00:16:40.119 fused_ordering(184) 00:16:40.119 fused_ordering(185) 00:16:40.119 fused_ordering(186) 00:16:40.119 fused_ordering(187) 00:16:40.119 fused_ordering(188) 00:16:40.119 fused_ordering(189) 00:16:40.119 fused_ordering(190) 00:16:40.119 fused_ordering(191) 00:16:40.119 fused_ordering(192) 00:16:40.119 fused_ordering(193) 00:16:40.119 fused_ordering(194) 00:16:40.119 fused_ordering(195) 00:16:40.119 fused_ordering(196) 00:16:40.119 fused_ordering(197) 00:16:40.119 fused_ordering(198) 00:16:40.119 fused_ordering(199) 00:16:40.119 fused_ordering(200) 00:16:40.119 fused_ordering(201) 00:16:40.119 fused_ordering(202) 00:16:40.119 fused_ordering(203) 00:16:40.119 fused_ordering(204) 
00:16:40.119 fused_ordering(205) 00:16:40.119 fused_ordering(206) 00:16:40.119 fused_ordering(207) 00:16:40.119 fused_ordering(208) 00:16:40.119 fused_ordering(209) 00:16:40.119 fused_ordering(210) 00:16:40.119 fused_ordering(211) 00:16:40.119 fused_ordering(212) 00:16:40.119 fused_ordering(213) 00:16:40.119 fused_ordering(214) 00:16:40.119 fused_ordering(215) 00:16:40.119 fused_ordering(216) 00:16:40.119 fused_ordering(217) 00:16:40.119 fused_ordering(218) 00:16:40.119 fused_ordering(219) 00:16:40.119 fused_ordering(220) 00:16:40.119 fused_ordering(221) 00:16:40.119 fused_ordering(222) 00:16:40.119 fused_ordering(223) 00:16:40.119 fused_ordering(224) 00:16:40.119 fused_ordering(225) 00:16:40.119 fused_ordering(226) 00:16:40.119 fused_ordering(227) 00:16:40.119 fused_ordering(228) 00:16:40.119 fused_ordering(229) 00:16:40.119 fused_ordering(230) 00:16:40.119 fused_ordering(231) 00:16:40.119 fused_ordering(232) 00:16:40.119 fused_ordering(233) 00:16:40.119 fused_ordering(234) 00:16:40.119 fused_ordering(235) 00:16:40.119 fused_ordering(236) 00:16:40.119 fused_ordering(237) 00:16:40.119 fused_ordering(238) 00:16:40.119 fused_ordering(239) 00:16:40.119 fused_ordering(240) 00:16:40.119 fused_ordering(241) 00:16:40.119 fused_ordering(242) 00:16:40.119 fused_ordering(243) 00:16:40.119 fused_ordering(244) 00:16:40.119 fused_ordering(245) 00:16:40.119 fused_ordering(246) 00:16:40.119 fused_ordering(247) 00:16:40.119 fused_ordering(248) 00:16:40.119 fused_ordering(249) 00:16:40.119 fused_ordering(250) 00:16:40.119 fused_ordering(251) 00:16:40.119 fused_ordering(252) 00:16:40.119 fused_ordering(253) 00:16:40.119 fused_ordering(254) 00:16:40.119 fused_ordering(255) 00:16:40.119 fused_ordering(256) 00:16:40.119 fused_ordering(257) 00:16:40.119 fused_ordering(258) 00:16:40.119 fused_ordering(259) 00:16:40.119 fused_ordering(260) 00:16:40.119 fused_ordering(261) 00:16:40.119 fused_ordering(262) 00:16:40.119 fused_ordering(263) 00:16:40.119 fused_ordering(264) 00:16:40.119 fused_ordering(265) 00:16:40.119 fused_ordering(266) 00:16:40.119 fused_ordering(267) 00:16:40.119 fused_ordering(268) 00:16:40.119 fused_ordering(269) 00:16:40.119 fused_ordering(270) 00:16:40.119 fused_ordering(271) 00:16:40.119 fused_ordering(272) 00:16:40.119 fused_ordering(273) 00:16:40.119 fused_ordering(274) 00:16:40.119 fused_ordering(275) 00:16:40.119 fused_ordering(276) 00:16:40.119 fused_ordering(277) 00:16:40.119 fused_ordering(278) 00:16:40.119 fused_ordering(279) 00:16:40.119 fused_ordering(280) 00:16:40.119 fused_ordering(281) 00:16:40.119 fused_ordering(282) 00:16:40.119 fused_ordering(283) 00:16:40.119 fused_ordering(284) 00:16:40.119 fused_ordering(285) 00:16:40.119 fused_ordering(286) 00:16:40.119 fused_ordering(287) 00:16:40.119 fused_ordering(288) 00:16:40.119 fused_ordering(289) 00:16:40.119 fused_ordering(290) 00:16:40.120 fused_ordering(291) 00:16:40.120 fused_ordering(292) 00:16:40.120 fused_ordering(293) 00:16:40.120 fused_ordering(294) 00:16:40.120 fused_ordering(295) 00:16:40.120 fused_ordering(296) 00:16:40.120 fused_ordering(297) 00:16:40.120 fused_ordering(298) 00:16:40.120 fused_ordering(299) 00:16:40.120 fused_ordering(300) 00:16:40.120 fused_ordering(301) 00:16:40.120 fused_ordering(302) 00:16:40.120 fused_ordering(303) 00:16:40.120 fused_ordering(304) 00:16:40.120 fused_ordering(305) 00:16:40.120 fused_ordering(306) 00:16:40.120 fused_ordering(307) 00:16:40.120 fused_ordering(308) 00:16:40.120 fused_ordering(309) 00:16:40.120 fused_ordering(310) 00:16:40.120 fused_ordering(311) 00:16:40.120 
fused_ordering(312) 00:16:40.120 fused_ordering(313) 00:16:40.120 fused_ordering(314) 00:16:40.120 fused_ordering(315) 00:16:40.120 fused_ordering(316) 00:16:40.120 fused_ordering(317) 00:16:40.120 fused_ordering(318) 00:16:40.120 fused_ordering(319) 00:16:40.120 fused_ordering(320) 00:16:40.120 fused_ordering(321) 00:16:40.120 fused_ordering(322) 00:16:40.120 fused_ordering(323) 00:16:40.120 fused_ordering(324) 00:16:40.120 fused_ordering(325) 00:16:40.120 fused_ordering(326) 00:16:40.120 fused_ordering(327) 00:16:40.120 fused_ordering(328) 00:16:40.120 fused_ordering(329) 00:16:40.120 fused_ordering(330) 00:16:40.120 fused_ordering(331) 00:16:40.120 fused_ordering(332) 00:16:40.120 fused_ordering(333) 00:16:40.120 fused_ordering(334) 00:16:40.120 fused_ordering(335) 00:16:40.120 fused_ordering(336) 00:16:40.120 fused_ordering(337) 00:16:40.120 fused_ordering(338) 00:16:40.120 fused_ordering(339) 00:16:40.120 fused_ordering(340) 00:16:40.120 fused_ordering(341) 00:16:40.120 fused_ordering(342) 00:16:40.120 fused_ordering(343) 00:16:40.120 fused_ordering(344) 00:16:40.120 fused_ordering(345) 00:16:40.120 fused_ordering(346) 00:16:40.120 fused_ordering(347) 00:16:40.120 fused_ordering(348) 00:16:40.120 fused_ordering(349) 00:16:40.120 fused_ordering(350) 00:16:40.120 fused_ordering(351) 00:16:40.120 fused_ordering(352) 00:16:40.120 fused_ordering(353) 00:16:40.120 fused_ordering(354) 00:16:40.120 fused_ordering(355) 00:16:40.120 fused_ordering(356) 00:16:40.120 fused_ordering(357) 00:16:40.120 fused_ordering(358) 00:16:40.120 fused_ordering(359) 00:16:40.120 fused_ordering(360) 00:16:40.120 fused_ordering(361) 00:16:40.120 fused_ordering(362) 00:16:40.120 fused_ordering(363) 00:16:40.120 fused_ordering(364) 00:16:40.120 fused_ordering(365) 00:16:40.120 fused_ordering(366) 00:16:40.120 fused_ordering(367) 00:16:40.120 fused_ordering(368) 00:16:40.120 fused_ordering(369) 00:16:40.120 fused_ordering(370) 00:16:40.120 fused_ordering(371) 00:16:40.120 fused_ordering(372) 00:16:40.120 fused_ordering(373) 00:16:40.120 fused_ordering(374) 00:16:40.120 fused_ordering(375) 00:16:40.120 fused_ordering(376) 00:16:40.120 fused_ordering(377) 00:16:40.120 fused_ordering(378) 00:16:40.120 fused_ordering(379) 00:16:40.120 fused_ordering(380) 00:16:40.120 fused_ordering(381) 00:16:40.120 fused_ordering(382) 00:16:40.120 fused_ordering(383) 00:16:40.120 fused_ordering(384) 00:16:40.120 fused_ordering(385) 00:16:40.120 fused_ordering(386) 00:16:40.120 fused_ordering(387) 00:16:40.120 fused_ordering(388) 00:16:40.120 fused_ordering(389) 00:16:40.120 fused_ordering(390) 00:16:40.120 fused_ordering(391) 00:16:40.120 fused_ordering(392) 00:16:40.120 fused_ordering(393) 00:16:40.120 fused_ordering(394) 00:16:40.120 fused_ordering(395) 00:16:40.120 fused_ordering(396) 00:16:40.120 fused_ordering(397) 00:16:40.120 fused_ordering(398) 00:16:40.120 fused_ordering(399) 00:16:40.120 fused_ordering(400) 00:16:40.120 fused_ordering(401) 00:16:40.120 fused_ordering(402) 00:16:40.120 fused_ordering(403) 00:16:40.120 fused_ordering(404) 00:16:40.120 fused_ordering(405) 00:16:40.120 fused_ordering(406) 00:16:40.120 fused_ordering(407) 00:16:40.120 fused_ordering(408) 00:16:40.120 fused_ordering(409) 00:16:40.120 fused_ordering(410) 00:16:40.120 fused_ordering(411) 00:16:40.120 fused_ordering(412) 00:16:40.120 fused_ordering(413) 00:16:40.120 fused_ordering(414) 00:16:40.120 fused_ordering(415) 00:16:40.120 fused_ordering(416) 00:16:40.120 fused_ordering(417) 00:16:40.120 fused_ordering(418) 00:16:40.120 fused_ordering(419) 
00:16:40.120 fused_ordering(420) 00:16:40.120 fused_ordering(421) 00:16:40.120 fused_ordering(422) 00:16:40.120 fused_ordering(423) 00:16:40.120 fused_ordering(424) 00:16:40.120 fused_ordering(425) 00:16:40.120 fused_ordering(426) 00:16:40.120 fused_ordering(427) 00:16:40.120 fused_ordering(428) 00:16:40.120 fused_ordering(429) 00:16:40.120 fused_ordering(430) 00:16:40.120 fused_ordering(431) 00:16:40.120 fused_ordering(432) 00:16:40.120 fused_ordering(433) 00:16:40.120 fused_ordering(434) 00:16:40.120 fused_ordering(435) 00:16:40.120 fused_ordering(436) 00:16:40.120 fused_ordering(437) 00:16:40.120 fused_ordering(438) 00:16:40.120 fused_ordering(439) 00:16:40.120 fused_ordering(440) 00:16:40.120 fused_ordering(441) 00:16:40.120 fused_ordering(442) 00:16:40.120 fused_ordering(443) 00:16:40.120 fused_ordering(444) 00:16:40.120 fused_ordering(445) 00:16:40.120 fused_ordering(446) 00:16:40.120 fused_ordering(447) 00:16:40.120 fused_ordering(448) 00:16:40.120 fused_ordering(449) 00:16:40.120 fused_ordering(450) 00:16:40.120 fused_ordering(451) 00:16:40.120 fused_ordering(452) 00:16:40.120 fused_ordering(453) 00:16:40.120 fused_ordering(454) 00:16:40.120 fused_ordering(455) 00:16:40.120 fused_ordering(456) 00:16:40.120 fused_ordering(457) 00:16:40.120 fused_ordering(458) 00:16:40.120 fused_ordering(459) 00:16:40.120 fused_ordering(460) 00:16:40.120 fused_ordering(461) 00:16:40.120 fused_ordering(462) 00:16:40.120 fused_ordering(463) 00:16:40.120 fused_ordering(464) 00:16:40.120 fused_ordering(465) 00:16:40.120 fused_ordering(466) 00:16:40.120 fused_ordering(467) 00:16:40.120 fused_ordering(468) 00:16:40.120 fused_ordering(469) 00:16:40.120 fused_ordering(470) 00:16:40.120 fused_ordering(471) 00:16:40.120 fused_ordering(472) 00:16:40.120 fused_ordering(473) 00:16:40.120 fused_ordering(474) 00:16:40.120 fused_ordering(475) 00:16:40.120 fused_ordering(476) 00:16:40.120 fused_ordering(477) 00:16:40.120 fused_ordering(478) 00:16:40.120 fused_ordering(479) 00:16:40.120 fused_ordering(480) 00:16:40.120 fused_ordering(481) 00:16:40.120 fused_ordering(482) 00:16:40.120 fused_ordering(483) 00:16:40.120 fused_ordering(484) 00:16:40.120 fused_ordering(485) 00:16:40.120 fused_ordering(486) 00:16:40.120 fused_ordering(487) 00:16:40.120 fused_ordering(488) 00:16:40.120 fused_ordering(489) 00:16:40.120 fused_ordering(490) 00:16:40.120 fused_ordering(491) 00:16:40.120 fused_ordering(492) 00:16:40.120 fused_ordering(493) 00:16:40.120 fused_ordering(494) 00:16:40.120 fused_ordering(495) 00:16:40.120 fused_ordering(496) 00:16:40.120 fused_ordering(497) 00:16:40.120 fused_ordering(498) 00:16:40.120 fused_ordering(499) 00:16:40.120 fused_ordering(500) 00:16:40.120 fused_ordering(501) 00:16:40.120 fused_ordering(502) 00:16:40.120 fused_ordering(503) 00:16:40.120 fused_ordering(504) 00:16:40.120 fused_ordering(505) 00:16:40.120 fused_ordering(506) 00:16:40.120 fused_ordering(507) 00:16:40.120 fused_ordering(508) 00:16:40.120 fused_ordering(509) 00:16:40.120 fused_ordering(510) 00:16:40.120 fused_ordering(511) 00:16:40.120 fused_ordering(512) 00:16:40.120 fused_ordering(513) 00:16:40.120 fused_ordering(514) 00:16:40.120 fused_ordering(515) 00:16:40.120 fused_ordering(516) 00:16:40.120 fused_ordering(517) 00:16:40.120 fused_ordering(518) 00:16:40.120 fused_ordering(519) 00:16:40.120 fused_ordering(520) 00:16:40.120 fused_ordering(521) 00:16:40.120 fused_ordering(522) 00:16:40.120 fused_ordering(523) 00:16:40.120 fused_ordering(524) 00:16:40.120 fused_ordering(525) 00:16:40.120 fused_ordering(526) 00:16:40.120 
fused_ordering(527) 00:16:40.120 fused_ordering(528) 00:16:40.120 fused_ordering(529) 00:16:40.120 fused_ordering(530) 00:16:40.120 fused_ordering(531) 00:16:40.120 fused_ordering(532) 00:16:40.120 fused_ordering(533) 00:16:40.120 fused_ordering(534) 00:16:40.120 fused_ordering(535) 00:16:40.120 fused_ordering(536) 00:16:40.120 fused_ordering(537) 00:16:40.120 fused_ordering(538) 00:16:40.120 fused_ordering(539) 00:16:40.120 fused_ordering(540) 00:16:40.120 fused_ordering(541) 00:16:40.120 fused_ordering(542) 00:16:40.120 fused_ordering(543) 00:16:40.120 fused_ordering(544) 00:16:40.120 fused_ordering(545) 00:16:40.120 fused_ordering(546) 00:16:40.120 fused_ordering(547) 00:16:40.120 fused_ordering(548) 00:16:40.120 fused_ordering(549) 00:16:40.120 fused_ordering(550) 00:16:40.120 fused_ordering(551) 00:16:40.120 fused_ordering(552) 00:16:40.120 fused_ordering(553) 00:16:40.120 fused_ordering(554) 00:16:40.120 fused_ordering(555) 00:16:40.120 fused_ordering(556) 00:16:40.120 fused_ordering(557) 00:16:40.120 fused_ordering(558) 00:16:40.120 fused_ordering(559) 00:16:40.120 fused_ordering(560) 00:16:40.120 fused_ordering(561) 00:16:40.120 fused_ordering(562) 00:16:40.120 fused_ordering(563) 00:16:40.120 fused_ordering(564) 00:16:40.120 fused_ordering(565) 00:16:40.120 fused_ordering(566) 00:16:40.120 fused_ordering(567) 00:16:40.120 fused_ordering(568) 00:16:40.120 fused_ordering(569) 00:16:40.121 fused_ordering(570) 00:16:40.121 fused_ordering(571) 00:16:40.121 fused_ordering(572) 00:16:40.121 fused_ordering(573) 00:16:40.121 fused_ordering(574) 00:16:40.121 fused_ordering(575) 00:16:40.121 fused_ordering(576) 00:16:40.121 fused_ordering(577) 00:16:40.121 fused_ordering(578) 00:16:40.121 fused_ordering(579) 00:16:40.121 fused_ordering(580) 00:16:40.121 fused_ordering(581) 00:16:40.121 fused_ordering(582) 00:16:40.121 fused_ordering(583) 00:16:40.121 fused_ordering(584) 00:16:40.121 fused_ordering(585) 00:16:40.121 fused_ordering(586) 00:16:40.121 fused_ordering(587) 00:16:40.121 fused_ordering(588) 00:16:40.121 fused_ordering(589) 00:16:40.121 fused_ordering(590) 00:16:40.121 fused_ordering(591) 00:16:40.121 fused_ordering(592) 00:16:40.121 fused_ordering(593) 00:16:40.121 fused_ordering(594) 00:16:40.121 fused_ordering(595) 00:16:40.121 fused_ordering(596) 00:16:40.121 fused_ordering(597) 00:16:40.121 fused_ordering(598) 00:16:40.121 fused_ordering(599) 00:16:40.121 fused_ordering(600) 00:16:40.121 fused_ordering(601) 00:16:40.121 fused_ordering(602) 00:16:40.121 fused_ordering(603) 00:16:40.121 fused_ordering(604) 00:16:40.121 fused_ordering(605) 00:16:40.121 fused_ordering(606) 00:16:40.121 fused_ordering(607) 00:16:40.121 fused_ordering(608) 00:16:40.121 fused_ordering(609) 00:16:40.121 fused_ordering(610) 00:16:40.121 fused_ordering(611) 00:16:40.121 fused_ordering(612) 00:16:40.121 fused_ordering(613) 00:16:40.121 fused_ordering(614) 00:16:40.121 fused_ordering(615) 00:16:40.380 fused_ordering(616) 00:16:40.380 fused_ordering(617) 00:16:40.380 fused_ordering(618) 00:16:40.380 fused_ordering(619) 00:16:40.380 fused_ordering(620) 00:16:40.380 fused_ordering(621) 00:16:40.380 fused_ordering(622) 00:16:40.380 fused_ordering(623) 00:16:40.380 fused_ordering(624) 00:16:40.380 fused_ordering(625) 00:16:40.380 fused_ordering(626) 00:16:40.380 fused_ordering(627) 00:16:40.380 fused_ordering(628) 00:16:40.380 fused_ordering(629) 00:16:40.380 fused_ordering(630) 00:16:40.380 fused_ordering(631) 00:16:40.380 fused_ordering(632) 00:16:40.380 fused_ordering(633) 00:16:40.380 fused_ordering(634) 
00:16:40.380 fused_ordering(635) 00:16:40.380 fused_ordering(636) 00:16:40.380 fused_ordering(637) 00:16:40.380 fused_ordering(638) 00:16:40.380 fused_ordering(639) 00:16:40.380 fused_ordering(640) 00:16:40.380 fused_ordering(641) 00:16:40.380 fused_ordering(642) 00:16:40.380 fused_ordering(643) 00:16:40.380 fused_ordering(644) 00:16:40.380 fused_ordering(645) 00:16:40.380 fused_ordering(646) 00:16:40.380 fused_ordering(647) 00:16:40.380 fused_ordering(648) 00:16:40.380 fused_ordering(649) 00:16:40.380 fused_ordering(650) 00:16:40.380 fused_ordering(651) 00:16:40.380 fused_ordering(652) 00:16:40.380 fused_ordering(653) 00:16:40.380 fused_ordering(654) 00:16:40.380 fused_ordering(655) 00:16:40.380 fused_ordering(656) 00:16:40.380 fused_ordering(657) 00:16:40.380 fused_ordering(658) 00:16:40.380 fused_ordering(659) 00:16:40.380 fused_ordering(660) 00:16:40.380 fused_ordering(661) 00:16:40.380 fused_ordering(662) 00:16:40.380 fused_ordering(663) 00:16:40.380 fused_ordering(664) 00:16:40.380 fused_ordering(665) 00:16:40.380 fused_ordering(666) 00:16:40.380 fused_ordering(667) 00:16:40.380 fused_ordering(668) 00:16:40.380 fused_ordering(669) 00:16:40.380 fused_ordering(670) 00:16:40.380 fused_ordering(671) 00:16:40.380 fused_ordering(672) 00:16:40.380 fused_ordering(673) 00:16:40.380 fused_ordering(674) 00:16:40.380 fused_ordering(675) 00:16:40.380 fused_ordering(676) 00:16:40.380 fused_ordering(677) 00:16:40.380 fused_ordering(678) 00:16:40.380 fused_ordering(679) 00:16:40.380 fused_ordering(680) 00:16:40.380 fused_ordering(681) 00:16:40.380 fused_ordering(682) 00:16:40.380 fused_ordering(683) 00:16:40.380 fused_ordering(684) 00:16:40.380 fused_ordering(685) 00:16:40.380 fused_ordering(686) 00:16:40.380 fused_ordering(687) 00:16:40.380 fused_ordering(688) 00:16:40.380 fused_ordering(689) 00:16:40.380 fused_ordering(690) 00:16:40.380 fused_ordering(691) 00:16:40.380 fused_ordering(692) 00:16:40.380 fused_ordering(693) 00:16:40.380 fused_ordering(694) 00:16:40.380 fused_ordering(695) 00:16:40.380 fused_ordering(696) 00:16:40.380 fused_ordering(697) 00:16:40.380 fused_ordering(698) 00:16:40.380 fused_ordering(699) 00:16:40.380 fused_ordering(700) 00:16:40.380 fused_ordering(701) 00:16:40.380 fused_ordering(702) 00:16:40.380 fused_ordering(703) 00:16:40.380 fused_ordering(704) 00:16:40.380 fused_ordering(705) 00:16:40.380 fused_ordering(706) 00:16:40.380 fused_ordering(707) 00:16:40.380 fused_ordering(708) 00:16:40.380 fused_ordering(709) 00:16:40.380 fused_ordering(710) 00:16:40.380 fused_ordering(711) 00:16:40.380 fused_ordering(712) 00:16:40.380 fused_ordering(713) 00:16:40.380 fused_ordering(714) 00:16:40.380 fused_ordering(715) 00:16:40.380 fused_ordering(716) 00:16:40.380 fused_ordering(717) 00:16:40.380 fused_ordering(718) 00:16:40.380 fused_ordering(719) 00:16:40.380 fused_ordering(720) 00:16:40.380 fused_ordering(721) 00:16:40.380 fused_ordering(722) 00:16:40.380 fused_ordering(723) 00:16:40.380 fused_ordering(724) 00:16:40.380 fused_ordering(725) 00:16:40.380 fused_ordering(726) 00:16:40.380 fused_ordering(727) 00:16:40.380 fused_ordering(728) 00:16:40.380 fused_ordering(729) 00:16:40.380 fused_ordering(730) 00:16:40.380 fused_ordering(731) 00:16:40.380 fused_ordering(732) 00:16:40.380 fused_ordering(733) 00:16:40.380 fused_ordering(734) 00:16:40.380 fused_ordering(735) 00:16:40.380 fused_ordering(736) 00:16:40.380 fused_ordering(737) 00:16:40.380 fused_ordering(738) 00:16:40.380 fused_ordering(739) 00:16:40.380 fused_ordering(740) 00:16:40.380 fused_ordering(741) 00:16:40.380 
fused_ordering(742) 00:16:40.380 fused_ordering(743) 00:16:40.380 fused_ordering(744) 00:16:40.380 fused_ordering(745) 00:16:40.380 fused_ordering(746) 00:16:40.380 fused_ordering(747) 00:16:40.380 fused_ordering(748) 00:16:40.380 fused_ordering(749) 00:16:40.380 fused_ordering(750) 00:16:40.380 fused_ordering(751) 00:16:40.380 fused_ordering(752) 00:16:40.380 fused_ordering(753) 00:16:40.380 fused_ordering(754) 00:16:40.380 fused_ordering(755) 00:16:40.380 fused_ordering(756) 00:16:40.380 fused_ordering(757) 00:16:40.380 fused_ordering(758) 00:16:40.380 fused_ordering(759) 00:16:40.380 fused_ordering(760) 00:16:40.380 fused_ordering(761) 00:16:40.380 fused_ordering(762) 00:16:40.380 fused_ordering(763) 00:16:40.380 fused_ordering(764) 00:16:40.380 fused_ordering(765) 00:16:40.380 fused_ordering(766) 00:16:40.380 fused_ordering(767) 00:16:40.380 fused_ordering(768) 00:16:40.380 fused_ordering(769) 00:16:40.380 fused_ordering(770) 00:16:40.380 fused_ordering(771) 00:16:40.380 fused_ordering(772) 00:16:40.380 fused_ordering(773) 00:16:40.380 fused_ordering(774) 00:16:40.380 fused_ordering(775) 00:16:40.380 fused_ordering(776) 00:16:40.380 fused_ordering(777) 00:16:40.380 fused_ordering(778) 00:16:40.380 fused_ordering(779) 00:16:40.380 fused_ordering(780) 00:16:40.380 fused_ordering(781) 00:16:40.380 fused_ordering(782) 00:16:40.380 fused_ordering(783) 00:16:40.380 fused_ordering(784) 00:16:40.380 fused_ordering(785) 00:16:40.380 fused_ordering(786) 00:16:40.380 fused_ordering(787) 00:16:40.380 fused_ordering(788) 00:16:40.380 fused_ordering(789) 00:16:40.380 fused_ordering(790) 00:16:40.380 fused_ordering(791) 00:16:40.380 fused_ordering(792) 00:16:40.380 fused_ordering(793) 00:16:40.380 fused_ordering(794) 00:16:40.380 fused_ordering(795) 00:16:40.380 fused_ordering(796) 00:16:40.380 fused_ordering(797) 00:16:40.380 fused_ordering(798) 00:16:40.380 fused_ordering(799) 00:16:40.380 fused_ordering(800) 00:16:40.381 fused_ordering(801) 00:16:40.381 fused_ordering(802) 00:16:40.381 fused_ordering(803) 00:16:40.381 fused_ordering(804) 00:16:40.381 fused_ordering(805) 00:16:40.381 fused_ordering(806) 00:16:40.381 fused_ordering(807) 00:16:40.381 fused_ordering(808) 00:16:40.381 fused_ordering(809) 00:16:40.381 fused_ordering(810) 00:16:40.381 fused_ordering(811) 00:16:40.381 fused_ordering(812) 00:16:40.381 fused_ordering(813) 00:16:40.381 fused_ordering(814) 00:16:40.381 fused_ordering(815) 00:16:40.381 fused_ordering(816) 00:16:40.381 fused_ordering(817) 00:16:40.381 fused_ordering(818) 00:16:40.381 fused_ordering(819) 00:16:40.381 fused_ordering(820) 00:16:40.640 fused_ordering(821) 00:16:40.640 fused_ordering(822) 00:16:40.640 fused_ordering(823) 00:16:40.640 fused_ordering(824) 00:16:40.640 fused_ordering(825) 00:16:40.640 fused_ordering(826) 00:16:40.640 fused_ordering(827) 00:16:40.640 fused_ordering(828) 00:16:40.640 fused_ordering(829) 00:16:40.640 fused_ordering(830) 00:16:40.640 fused_ordering(831) 00:16:40.640 fused_ordering(832) 00:16:40.640 fused_ordering(833) 00:16:40.640 fused_ordering(834) 00:16:40.640 fused_ordering(835) 00:16:40.640 fused_ordering(836) 00:16:40.640 fused_ordering(837) 00:16:40.640 fused_ordering(838) 00:16:40.640 fused_ordering(839) 00:16:40.640 fused_ordering(840) 00:16:40.640 fused_ordering(841) 00:16:40.640 fused_ordering(842) 00:16:40.640 fused_ordering(843) 00:16:40.640 fused_ordering(844) 00:16:40.640 fused_ordering(845) 00:16:40.640 fused_ordering(846) 00:16:40.640 fused_ordering(847) 00:16:40.640 fused_ordering(848) 00:16:40.640 fused_ordering(849) 
00:16:40.640 fused_ordering(850) 00:16:40.640 fused_ordering(851) 00:16:40.640 fused_ordering(852) 00:16:40.640 fused_ordering(853) 00:16:40.640 fused_ordering(854) 00:16:40.640 fused_ordering(855) 00:16:40.640 fused_ordering(856) 00:16:40.640 fused_ordering(857) 00:16:40.640 fused_ordering(858) 00:16:40.640 fused_ordering(859) 00:16:40.640 fused_ordering(860) 00:16:40.640 fused_ordering(861) 00:16:40.640 fused_ordering(862) 00:16:40.640 fused_ordering(863) 00:16:40.640 fused_ordering(864) 00:16:40.640 fused_ordering(865) 00:16:40.640 fused_ordering(866) 00:16:40.640 fused_ordering(867) 00:16:40.640 fused_ordering(868) 00:16:40.640 fused_ordering(869) 00:16:40.640 fused_ordering(870) 00:16:40.640 fused_ordering(871) 00:16:40.640 fused_ordering(872) 00:16:40.640 fused_ordering(873) 00:16:40.640 fused_ordering(874) 00:16:40.640 fused_ordering(875) 00:16:40.640 fused_ordering(876) 00:16:40.640 fused_ordering(877) 00:16:40.640 fused_ordering(878) 00:16:40.640 fused_ordering(879) 00:16:40.640 fused_ordering(880) 00:16:40.640 fused_ordering(881) 00:16:40.640 fused_ordering(882) 00:16:40.640 fused_ordering(883) 00:16:40.640 fused_ordering(884) 00:16:40.640 fused_ordering(885) 00:16:40.640 fused_ordering(886) 00:16:40.640 fused_ordering(887) 00:16:40.640 fused_ordering(888) 00:16:40.640 fused_ordering(889) 00:16:40.640 fused_ordering(890) 00:16:40.640 fused_ordering(891) 00:16:40.640 fused_ordering(892) 00:16:40.640 fused_ordering(893) 00:16:40.640 fused_ordering(894) 00:16:40.640 fused_ordering(895) 00:16:40.640 fused_ordering(896) 00:16:40.640 fused_ordering(897) 00:16:40.640 fused_ordering(898) 00:16:40.640 fused_ordering(899) 00:16:40.640 fused_ordering(900) 00:16:40.640 fused_ordering(901) 00:16:40.640 fused_ordering(902) 00:16:40.640 fused_ordering(903) 00:16:40.640 fused_ordering(904) 00:16:40.640 fused_ordering(905) 00:16:40.640 fused_ordering(906) 00:16:40.640 fused_ordering(907) 00:16:40.640 fused_ordering(908) 00:16:40.640 fused_ordering(909) 00:16:40.640 fused_ordering(910) 00:16:40.640 fused_ordering(911) 00:16:40.640 fused_ordering(912) 00:16:40.640 fused_ordering(913) 00:16:40.640 fused_ordering(914) 00:16:40.640 fused_ordering(915) 00:16:40.640 fused_ordering(916) 00:16:40.640 fused_ordering(917) 00:16:40.640 fused_ordering(918) 00:16:40.640 fused_ordering(919) 00:16:40.640 fused_ordering(920) 00:16:40.640 fused_ordering(921) 00:16:40.640 fused_ordering(922) 00:16:40.640 fused_ordering(923) 00:16:40.640 fused_ordering(924) 00:16:40.640 fused_ordering(925) 00:16:40.640 fused_ordering(926) 00:16:40.640 fused_ordering(927) 00:16:40.640 fused_ordering(928) 00:16:40.640 fused_ordering(929) 00:16:40.640 fused_ordering(930) 00:16:40.640 fused_ordering(931) 00:16:40.640 fused_ordering(932) 00:16:40.640 fused_ordering(933) 00:16:40.640 fused_ordering(934) 00:16:40.640 fused_ordering(935) 00:16:40.640 fused_ordering(936) 00:16:40.640 fused_ordering(937) 00:16:40.640 fused_ordering(938) 00:16:40.640 fused_ordering(939) 00:16:40.640 fused_ordering(940) 00:16:40.640 fused_ordering(941) 00:16:40.640 fused_ordering(942) 00:16:40.640 fused_ordering(943) 00:16:40.640 fused_ordering(944) 00:16:40.640 fused_ordering(945) 00:16:40.640 fused_ordering(946) 00:16:40.640 fused_ordering(947) 00:16:40.640 fused_ordering(948) 00:16:40.640 fused_ordering(949) 00:16:40.640 fused_ordering(950) 00:16:40.640 fused_ordering(951) 00:16:40.640 fused_ordering(952) 00:16:40.640 fused_ordering(953) 00:16:40.640 fused_ordering(954) 00:16:40.640 fused_ordering(955) 00:16:40.640 fused_ordering(956) 00:16:40.640 
fused_ordering(957) 00:16:40.640 fused_ordering(958) 00:16:40.640 fused_ordering(959) 00:16:40.640 fused_ordering(960) 00:16:40.640 fused_ordering(961) 00:16:40.640 fused_ordering(962) 00:16:40.640 fused_ordering(963) 00:16:40.640 fused_ordering(964) 00:16:40.640 fused_ordering(965) 00:16:40.640 fused_ordering(966) 00:16:40.640 fused_ordering(967) 00:16:40.640 fused_ordering(968) 00:16:40.640 fused_ordering(969) 00:16:40.640 fused_ordering(970) 00:16:40.640 fused_ordering(971) 00:16:40.640 fused_ordering(972) 00:16:40.640 fused_ordering(973) 00:16:40.640 fused_ordering(974) 00:16:40.640 fused_ordering(975) 00:16:40.640 fused_ordering(976) 00:16:40.640 fused_ordering(977) 00:16:40.640 fused_ordering(978) 00:16:40.640 fused_ordering(979) 00:16:40.640 fused_ordering(980) 00:16:40.640 fused_ordering(981) 00:16:40.640 fused_ordering(982) 00:16:40.640 fused_ordering(983) 00:16:40.640 fused_ordering(984) 00:16:40.640 fused_ordering(985) 00:16:40.640 fused_ordering(986) 00:16:40.640 fused_ordering(987) 00:16:40.640 fused_ordering(988) 00:16:40.640 fused_ordering(989) 00:16:40.640 fused_ordering(990) 00:16:40.640 fused_ordering(991) 00:16:40.640 fused_ordering(992) 00:16:40.640 fused_ordering(993) 00:16:40.640 fused_ordering(994) 00:16:40.640 fused_ordering(995) 00:16:40.640 fused_ordering(996) 00:16:40.640 fused_ordering(997) 00:16:40.640 fused_ordering(998) 00:16:40.640 fused_ordering(999) 00:16:40.640 fused_ordering(1000) 00:16:40.640 fused_ordering(1001) 00:16:40.640 fused_ordering(1002) 00:16:40.640 fused_ordering(1003) 00:16:40.640 fused_ordering(1004) 00:16:40.640 fused_ordering(1005) 00:16:40.640 fused_ordering(1006) 00:16:40.640 fused_ordering(1007) 00:16:40.640 fused_ordering(1008) 00:16:40.640 fused_ordering(1009) 00:16:40.640 fused_ordering(1010) 00:16:40.640 fused_ordering(1011) 00:16:40.640 fused_ordering(1012) 00:16:40.640 fused_ordering(1013) 00:16:40.640 fused_ordering(1014) 00:16:40.640 fused_ordering(1015) 00:16:40.640 fused_ordering(1016) 00:16:40.640 fused_ordering(1017) 00:16:40.640 fused_ordering(1018) 00:16:40.640 fused_ordering(1019) 00:16:40.640 fused_ordering(1020) 00:16:40.640 fused_ordering(1021) 00:16:40.640 fused_ordering(1022) 00:16:40.640 fused_ordering(1023) 00:16:40.640 11:44:11 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:40.640 11:44:11 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:40.640 11:44:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:40.640 11:44:11 -- nvmf/common.sh@116 -- # sync 00:16:40.640 11:44:11 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:40.640 11:44:11 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:40.640 11:44:11 -- nvmf/common.sh@119 -- # set +e 00:16:40.640 11:44:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.640 11:44:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:40.640 rmmod nvme_rdma 00:16:40.640 rmmod nvme_fabrics 00:16:40.640 11:44:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:40.640 11:44:11 -- nvmf/common.sh@123 -- # set -e 00:16:40.641 11:44:11 -- nvmf/common.sh@124 -- # return 0 00:16:40.641 11:44:11 -- nvmf/common.sh@477 -- # '[' -n 3719144 ']' 00:16:40.641 11:44:11 -- nvmf/common.sh@478 -- # killprocess 3719144 00:16:40.641 11:44:11 -- common/autotest_common.sh@936 -- # '[' -z 3719144 ']' 00:16:40.641 11:44:11 -- common/autotest_common.sh@940 -- # kill -0 3719144 00:16:40.641 11:44:11 -- common/autotest_common.sh@941 -- # uname 00:16:40.641 11:44:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.641 11:44:11 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3719144 00:16:40.641 11:44:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.641 11:44:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.641 11:44:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3719144' 00:16:40.641 killing process with pid 3719144 00:16:40.641 11:44:11 -- common/autotest_common.sh@955 -- # kill 3719144 00:16:40.641 11:44:11 -- common/autotest_common.sh@960 -- # wait 3719144 00:16:40.899 11:44:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:40.899 11:44:11 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:40.899 00:16:40.899 real 0m8.924s 00:16:40.899 user 0m4.839s 00:16:40.899 sys 0m5.462s 00:16:40.899 11:44:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:40.899 11:44:11 -- common/autotest_common.sh@10 -- # set +x 00:16:40.899 ************************************ 00:16:40.899 END TEST nvmf_fused_ordering 00:16:40.899 ************************************ 00:16:40.899 11:44:11 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:40.899 11:44:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:40.899 11:44:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.899 11:44:11 -- common/autotest_common.sh@10 -- # set +x 00:16:40.899 ************************************ 00:16:40.899 START TEST nvmf_delete_subsystem 00:16:40.899 ************************************ 00:16:40.899 11:44:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:40.899 * Looking for test storage... 00:16:40.899 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:40.899 11:44:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:41.158 11:44:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:41.158 11:44:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:41.158 11:44:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:41.158 11:44:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:41.158 11:44:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:41.158 11:44:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:41.158 11:44:11 -- scripts/common.sh@335 -- # IFS=.-: 00:16:41.158 11:44:11 -- scripts/common.sh@335 -- # read -ra ver1 00:16:41.158 11:44:11 -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.158 11:44:11 -- scripts/common.sh@336 -- # read -ra ver2 00:16:41.158 11:44:11 -- scripts/common.sh@337 -- # local 'op=<' 00:16:41.158 11:44:11 -- scripts/common.sh@339 -- # ver1_l=2 00:16:41.158 11:44:11 -- scripts/common.sh@340 -- # ver2_l=1 00:16:41.158 11:44:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:41.158 11:44:11 -- scripts/common.sh@343 -- # case "$op" in 00:16:41.158 11:44:11 -- scripts/common.sh@344 -- # : 1 00:16:41.158 11:44:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:41.158 11:44:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.158 11:44:11 -- scripts/common.sh@364 -- # decimal 1 00:16:41.158 11:44:11 -- scripts/common.sh@352 -- # local d=1 00:16:41.158 11:44:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.158 11:44:11 -- scripts/common.sh@354 -- # echo 1 00:16:41.158 11:44:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:41.158 11:44:11 -- scripts/common.sh@365 -- # decimal 2 00:16:41.158 11:44:11 -- scripts/common.sh@352 -- # local d=2 00:16:41.158 11:44:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.158 11:44:11 -- scripts/common.sh@354 -- # echo 2 00:16:41.158 11:44:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:41.158 11:44:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:41.158 11:44:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:41.158 11:44:11 -- scripts/common.sh@367 -- # return 0 00:16:41.158 11:44:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.158 11:44:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:41.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.158 --rc genhtml_branch_coverage=1 00:16:41.158 --rc genhtml_function_coverage=1 00:16:41.158 --rc genhtml_legend=1 00:16:41.158 --rc geninfo_all_blocks=1 00:16:41.158 --rc geninfo_unexecuted_blocks=1 00:16:41.158 00:16:41.158 ' 00:16:41.158 11:44:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:41.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.158 --rc genhtml_branch_coverage=1 00:16:41.158 --rc genhtml_function_coverage=1 00:16:41.158 --rc genhtml_legend=1 00:16:41.158 --rc geninfo_all_blocks=1 00:16:41.158 --rc geninfo_unexecuted_blocks=1 00:16:41.158 00:16:41.159 ' 00:16:41.159 11:44:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:41.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.159 --rc genhtml_branch_coverage=1 00:16:41.159 --rc genhtml_function_coverage=1 00:16:41.159 --rc genhtml_legend=1 00:16:41.159 --rc geninfo_all_blocks=1 00:16:41.159 --rc geninfo_unexecuted_blocks=1 00:16:41.159 00:16:41.159 ' 00:16:41.159 11:44:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:41.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.159 --rc genhtml_branch_coverage=1 00:16:41.159 --rc genhtml_function_coverage=1 00:16:41.159 --rc genhtml_legend=1 00:16:41.159 --rc geninfo_all_blocks=1 00:16:41.159 --rc geninfo_unexecuted_blocks=1 00:16:41.159 00:16:41.159 ' 00:16:41.159 11:44:11 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.159 11:44:11 -- nvmf/common.sh@7 -- # uname -s 00:16:41.159 11:44:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.159 11:44:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.159 11:44:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.159 11:44:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.159 11:44:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.159 11:44:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.159 11:44:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.159 11:44:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.159 11:44:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.159 11:44:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.159 11:44:11 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:41.159 11:44:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:41.159 11:44:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.159 11:44:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.159 11:44:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.159 11:44:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:41.159 11:44:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.159 11:44:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.159 11:44:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.159 11:44:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.159 11:44:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.159 11:44:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.159 11:44:11 -- paths/export.sh@5 -- # export PATH 00:16:41.159 11:44:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.159 11:44:11 -- nvmf/common.sh@46 -- # : 0 00:16:41.159 11:44:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:41.159 11:44:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:41.159 11:44:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:41.159 11:44:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.159 11:44:11 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.159 11:44:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:41.159 11:44:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:41.159 11:44:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:41.159 11:44:11 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:41.159 11:44:11 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:41.159 11:44:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.159 11:44:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:41.159 11:44:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:41.159 11:44:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:41.159 11:44:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.159 11:44:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.159 11:44:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.159 11:44:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:41.159 11:44:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:41.159 11:44:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:41.159 11:44:11 -- common/autotest_common.sh@10 -- # set +x 00:16:47.728 11:44:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:47.728 11:44:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:47.728 11:44:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:47.728 11:44:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:47.728 11:44:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:47.728 11:44:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:47.728 11:44:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:47.728 11:44:18 -- nvmf/common.sh@294 -- # net_devs=() 00:16:47.728 11:44:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:47.728 11:44:18 -- nvmf/common.sh@295 -- # e810=() 00:16:47.728 11:44:18 -- nvmf/common.sh@295 -- # local -ga e810 00:16:47.728 11:44:18 -- nvmf/common.sh@296 -- # x722=() 00:16:47.728 11:44:18 -- nvmf/common.sh@296 -- # local -ga x722 00:16:47.728 11:44:18 -- nvmf/common.sh@297 -- # mlx=() 00:16:47.728 11:44:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:47.728 11:44:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.728 11:44:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.729 11:44:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.729 11:44:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:47.729 11:44:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:47.729 11:44:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:47.729 11:44:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
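The pci_bus_cache lookups traced above are nvmf/common.sh sorting supported NICs into e810, x722 and mlx buckets by PCI vendor/device ID; because SPDK_TEST_NVMF_NICS=mlx5, only the Mellanox bucket is kept a few entries further down (pci_devs=("${mlx[@]}")). A rough standalone sketch of that classification step, using lspci in place of the harness's pci_bus_cache map (the lspci parsing and variable names here are illustrative assumptions, not the script's own code):

    # classify RDMA-capable NICs by PCI vendor:device ID (subset of the IDs seen in the trace)
    e810=(); x722=(); mlx=()
    while read -r addr id; do
        case "$id" in
            8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810
            8086:37d2)           x722+=("$addr") ;;   # Intel X722
            15b3:1015|15b3:1017|15b3:1019|15b3:101d|15b3:1021|15b3:1013) mlx+=("$addr") ;;  # Mellanox ConnectX family
        esac
    done < <(lspci -Dn | awk '{print $1, $3}')
    pci_devs=("${mlx[@]}")   # NET_TYPE=phy + SPDK_TEST_NVMF_NICS=mlx5: keep only the Mellanox devices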
00:16:47.729 11:44:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:47.729 11:44:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:47.729 11:44:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:47.729 11:44:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:47.729 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:47.729 11:44:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.729 11:44:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:47.729 11:44:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:47.729 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:47.729 11:44:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.729 11:44:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:47.729 11:44:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:47.729 11:44:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.729 11:44:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:47.729 11:44:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.729 11:44:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:47.729 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:47.729 11:44:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.729 11:44:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:47.729 11:44:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.729 11:44:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:47.729 11:44:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.729 11:44:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:47.729 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:47.729 11:44:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.729 11:44:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:47.729 11:44:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:47.729 11:44:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:47.729 11:44:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:47.729 11:44:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:47.729 11:44:18 -- nvmf/common.sh@57 -- # uname 00:16:47.729 11:44:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:47.729 11:44:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:47.729 11:44:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:47.729 11:44:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:47.729 
11:44:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:47.729 11:44:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:47.729 11:44:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:47.729 11:44:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:47.729 11:44:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:47.729 11:44:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:47.729 11:44:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:47.729 11:44:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.729 11:44:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:47.729 11:44:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:47.729 11:44:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:47.989 11:44:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:47.989 11:44:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:47.989 11:44:18 -- nvmf/common.sh@104 -- # continue 2 00:16:47.989 11:44:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:47.989 11:44:18 -- nvmf/common.sh@104 -- # continue 2 00:16:47.989 11:44:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:47.989 11:44:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:47.989 11:44:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:47.989 11:44:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:47.989 11:44:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:47.989 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:47.989 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:47.989 altname enp217s0f0np0 00:16:47.989 altname ens818f0np0 00:16:47.989 inet 192.168.100.8/24 scope global mlx_0_0 00:16:47.989 valid_lft forever preferred_lft forever 00:16:47.989 11:44:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:47.989 11:44:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:47.989 11:44:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:47.989 11:44:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:47.989 11:44:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:47.989 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:47.989 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:47.989 altname enp217s0f1np1 
00:16:47.989 altname ens818f1np1 00:16:47.989 inet 192.168.100.9/24 scope global mlx_0_1 00:16:47.989 valid_lft forever preferred_lft forever 00:16:47.989 11:44:18 -- nvmf/common.sh@410 -- # return 0 00:16:47.989 11:44:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:47.989 11:44:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:47.989 11:44:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:47.989 11:44:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:47.989 11:44:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.989 11:44:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:47.989 11:44:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:47.989 11:44:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:47.989 11:44:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:47.989 11:44:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:47.989 11:44:18 -- nvmf/common.sh@104 -- # continue 2 00:16:47.989 11:44:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.989 11:44:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:47.989 11:44:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:47.989 11:44:18 -- nvmf/common.sh@104 -- # continue 2 00:16:47.989 11:44:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:47.989 11:44:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:47.989 11:44:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:47.989 11:44:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:47.989 11:44:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:47.989 11:44:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:47.989 11:44:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:47.989 11:44:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:47.989 192.168.100.9' 00:16:47.989 11:44:18 -- nvmf/common.sh@445 -- # head -n 1 00:16:47.989 11:44:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:47.989 192.168.100.9' 00:16:47.989 11:44:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:47.989 11:44:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:47.989 192.168.100.9' 00:16:47.989 11:44:18 -- nvmf/common.sh@446 -- # tail -n +2 00:16:47.989 11:44:18 -- nvmf/common.sh@446 -- # head -n 1 00:16:47.989 11:44:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:47.989 11:44:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:47.989 11:44:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:16:47.989 11:44:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:47.989 11:44:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:47.989 11:44:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:47.989 11:44:18 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:47.989 11:44:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:47.989 11:44:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:47.989 11:44:18 -- common/autotest_common.sh@10 -- # set +x 00:16:47.989 11:44:18 -- nvmf/common.sh@469 -- # nvmfpid=3722880 00:16:47.989 11:44:18 -- nvmf/common.sh@470 -- # waitforlisten 3722880 00:16:47.989 11:44:18 -- common/autotest_common.sh@829 -- # '[' -z 3722880 ']' 00:16:47.989 11:44:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.989 11:44:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.989 11:44:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.989 11:44:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.989 11:44:18 -- common/autotest_common.sh@10 -- # set +x 00:16:47.989 11:44:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:47.989 [2024-12-03 11:44:18.549296] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:47.989 [2024-12-03 11:44:18.549351] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.989 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.249 [2024-12-03 11:44:18.620212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.249 [2024-12-03 11:44:18.693594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.249 [2024-12-03 11:44:18.693702] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.249 [2024-12-03 11:44:18.693712] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.249 [2024-12-03 11:44:18.693721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
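The EAL and app notices here (and the reactor_run messages that follow) are the target coming up: nvmfappstart has launched build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 and waitforlisten is polling its RPC socket at /var/tmp/spdk.sock until pid 3722880 answers. Outside the test harness, a roughly equivalent sketch (the polling loop and the use of rpc.py rpc_get_methods are assumptions standing in for the waitforlisten helper):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket until the app is ready to accept RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done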
00:16:48.249 [2024-12-03 11:44:18.693768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.249 [2024-12-03 11:44:18.693771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.817 11:44:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.817 11:44:19 -- common/autotest_common.sh@862 -- # return 0 00:16:48.817 11:44:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:48.817 11:44:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.817 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:16:48.817 11:44:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.817 11:44:19 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:48.817 11:44:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.817 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.076 [2024-12-03 11:44:19.431115] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe24a60/0xe28f50) succeed. 00:16:49.076 [2024-12-03 11:44:19.440140] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe25f60/0xe6a5f0) succeed. 00:16:49.076 11:44:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.076 11:44:19 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:49.076 11:44:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.076 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.076 11:44:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.076 11:44:19 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:49.076 11:44:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.076 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.076 [2024-12-03 11:44:19.530402] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:49.076 11:44:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.076 11:44:19 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:49.076 11:44:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.076 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.076 NULL1 00:16:49.076 11:44:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.076 11:44:19 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:49.076 11:44:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.076 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.076 Delay0 00:16:49.076 11:44:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.076 11:44:19 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:49.076 11:44:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.076 11:44:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.076 11:44:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.076 11:44:19 -- target/delete_subsystem.sh@28 -- # perf_pid=3723118 00:16:49.076 11:44:19 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 
512 -P 4 00:16:49.076 11:44:19 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:49.076 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.076 [2024-12-03 11:44:19.634595] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:50.980 11:44:21 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.980 11:44:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.980 11:44:21 -- common/autotest_common.sh@10 -- # set +x 00:16:52.357 NVMe io qpair process completion error 00:16:52.357 NVMe io qpair process completion error 00:16:52.357 NVMe io qpair process completion error 00:16:52.357 NVMe io qpair process completion error 00:16:52.357 NVMe io qpair process completion error 00:16:52.357 NVMe io qpair process completion error 00:16:52.357 11:44:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.357 11:44:22 -- target/delete_subsystem.sh@34 -- # delay=0 00:16:52.357 11:44:22 -- target/delete_subsystem.sh@35 -- # kill -0 3723118 00:16:52.357 11:44:22 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:52.633 11:44:23 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:52.633 11:44:23 -- target/delete_subsystem.sh@35 -- # kill -0 3723118 00:16:52.633 11:44:23 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Write completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Write completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Write completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 Read completed with 
error (sct=0, sc=8) 00:16:53.297 starting I/O failed: -6 00:16:53.297 [... long run of repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' completions omitted; these are the queued spdk_nvme_perf I/Os being failed while nqn.2016-06.io.spdk:cnode1 is deleted underneath them ...]
00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Write completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Write completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Write completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Write completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 Read completed with error (sct=0, sc=8) 00:16:53.299 11:44:23 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:53.299 11:44:23 -- target/delete_subsystem.sh@35 -- # kill -0 3723118 00:16:53.299 11:44:23 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:53.299 [2024-12-03 11:44:23.730369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:53.299 [2024-12-03 11:44:23.730416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:53.299 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:53.299 Initializing NVMe Controllers 00:16:53.299 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:53.299 Controller IO queue size 128, less than required. 00:16:53.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:53.299 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:53.299 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:53.299 Initialization complete. Launching workers. 
00:16:53.299 ======================================================== 00:16:53.299 Latency(us) 00:16:53.299 Device Information : IOPS MiB/s Average min max 00:16:53.299 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.47 0.04 1593887.73 1000090.68 2976804.42 00:16:53.299 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.47 0.04 1595147.57 1000538.07 2977035.79 00:16:53.299 ======================================================== 00:16:53.299 Total : 160.93 0.08 1594517.65 1000090.68 2977035.79 00:16:53.299 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@35 -- # kill -0 3723118 00:16:53.866 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3723118) - No such process 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@45 -- # NOT wait 3723118 00:16:53.866 11:44:24 -- common/autotest_common.sh@650 -- # local es=0 00:16:53.866 11:44:24 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3723118 00:16:53.866 11:44:24 -- common/autotest_common.sh@638 -- # local arg=wait 00:16:53.866 11:44:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.866 11:44:24 -- common/autotest_common.sh@642 -- # type -t wait 00:16:53.866 11:44:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.866 11:44:24 -- common/autotest_common.sh@653 -- # wait 3723118 00:16:53.866 11:44:24 -- common/autotest_common.sh@653 -- # es=1 00:16:53.866 11:44:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.866 11:44:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.866 11:44:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:53.866 11:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.866 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 11:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:53.866 11:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.866 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 [2024-12-03 11:44:24.247746] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:53.866 11:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:53.866 11:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.866 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 11:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@54 -- # perf_pid=3723925 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:53.866 11:44:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:53.866 
EAL: No free 2048 kB hugepages reported on node 1 00:16:53.866 [2024-12-03 11:44:24.337706] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:54.433 11:44:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.433 11:44:24 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:54.433 11:44:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.691 11:44:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.692 11:44:25 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:54.692 11:44:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:55.257 11:44:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:55.257 11:44:25 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:55.257 11:44:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:55.823 11:44:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:55.823 11:44:26 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:55.823 11:44:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:56.391 11:44:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:56.391 11:44:26 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:56.391 11:44:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:56.959 11:44:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:56.959 11:44:27 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:56.959 11:44:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:57.218 11:44:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:57.218 11:44:27 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:57.218 11:44:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:57.786 11:44:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:57.786 11:44:28 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:57.786 11:44:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:58.354 11:44:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:58.354 11:44:28 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:58.354 11:44:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:58.921 11:44:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:58.921 11:44:29 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:58.921 11:44:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:59.488 11:44:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:59.488 11:44:29 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:59.488 11:44:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:59.747 11:44:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:59.747 11:44:30 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:16:59.747 11:44:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:00.316 11:44:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:00.316 11:44:30 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:17:00.316 11:44:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:00.884 11:44:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:00.884 11:44:31 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:17:00.884 11:44:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:00.884 Initializing NVMe 
Controllers 00:17:00.884 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:00.884 Controller IO queue size 128, less than required. 00:17:00.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:00.884 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:00.884 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:00.884 Initialization complete. Launching workers. 00:17:00.884 ======================================================== 00:17:00.884 Latency(us) 00:17:00.884 Device Information : IOPS MiB/s Average min max 00:17:00.884 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001256.25 1000058.60 1003825.56 00:17:00.884 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002515.40 1000107.61 1005917.12 00:17:00.884 ======================================================== 00:17:00.884 Total : 256.00 0.12 1001885.82 1000058.60 1005917.12 00:17:00.884 00:17:01.453 11:44:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:01.453 11:44:31 -- target/delete_subsystem.sh@57 -- # kill -0 3723925 00:17:01.453 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3723925) - No such process 00:17:01.453 11:44:31 -- target/delete_subsystem.sh@67 -- # wait 3723925 00:17:01.453 11:44:31 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:01.453 11:44:31 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:01.453 11:44:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:01.453 11:44:31 -- nvmf/common.sh@116 -- # sync 00:17:01.453 11:44:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:01.453 11:44:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:01.453 11:44:31 -- nvmf/common.sh@119 -- # set +e 00:17:01.453 11:44:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:01.453 11:44:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:01.453 rmmod nvme_rdma 00:17:01.453 rmmod nvme_fabrics 00:17:01.453 11:44:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:01.453 11:44:31 -- nvmf/common.sh@123 -- # set -e 00:17:01.453 11:44:31 -- nvmf/common.sh@124 -- # return 0 00:17:01.453 11:44:31 -- nvmf/common.sh@477 -- # '[' -n 3722880 ']' 00:17:01.453 11:44:31 -- nvmf/common.sh@478 -- # killprocess 3722880 00:17:01.453 11:44:31 -- common/autotest_common.sh@936 -- # '[' -z 3722880 ']' 00:17:01.453 11:44:31 -- common/autotest_common.sh@940 -- # kill -0 3722880 00:17:01.453 11:44:31 -- common/autotest_common.sh@941 -- # uname 00:17:01.453 11:44:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.453 11:44:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3722880 00:17:01.453 11:44:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:01.453 11:44:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:01.453 11:44:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3722880' 00:17:01.453 killing process with pid 3722880 00:17:01.453 11:44:31 -- common/autotest_common.sh@955 -- # kill 3722880 00:17:01.453 11:44:31 -- common/autotest_common.sh@960 -- # wait 3722880 00:17:01.712 11:44:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:01.713 11:44:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:01.713 00:17:01.713 real 0m20.808s 
00:17:01.713 user 0m50.223s 00:17:01.713 sys 0m6.525s 00:17:01.713 11:44:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:01.713 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:01.713 ************************************ 00:17:01.713 END TEST nvmf_delete_subsystem 00:17:01.713 ************************************ 00:17:01.713 11:44:32 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:01.713 11:44:32 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:01.713 11:44:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:01.713 11:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.713 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:01.713 ************************************ 00:17:01.713 START TEST nvmf_nvme_cli 00:17:01.713 ************************************ 00:17:01.713 11:44:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:01.971 * Looking for test storage... 00:17:01.971 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:01.971 11:44:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:01.971 11:44:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:01.971 11:44:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:01.971 11:44:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:01.971 11:44:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:01.971 11:44:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:01.971 11:44:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:01.971 11:44:32 -- scripts/common.sh@335 -- # IFS=.-: 00:17:01.971 11:44:32 -- scripts/common.sh@335 -- # read -ra ver1 00:17:01.971 11:44:32 -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.971 11:44:32 -- scripts/common.sh@336 -- # read -ra ver2 00:17:01.971 11:44:32 -- scripts/common.sh@337 -- # local 'op=<' 00:17:01.971 11:44:32 -- scripts/common.sh@339 -- # ver1_l=2 00:17:01.971 11:44:32 -- scripts/common.sh@340 -- # ver2_l=1 00:17:01.971 11:44:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:01.971 11:44:32 -- scripts/common.sh@343 -- # case "$op" in 00:17:01.971 11:44:32 -- scripts/common.sh@344 -- # : 1 00:17:01.971 11:44:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:01.971 11:44:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.971 11:44:32 -- scripts/common.sh@364 -- # decimal 1 00:17:01.971 11:44:32 -- scripts/common.sh@352 -- # local d=1 00:17:01.971 11:44:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.971 11:44:32 -- scripts/common.sh@354 -- # echo 1 00:17:01.971 11:44:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:01.971 11:44:32 -- scripts/common.sh@365 -- # decimal 2 00:17:01.971 11:44:32 -- scripts/common.sh@352 -- # local d=2 00:17:01.971 11:44:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.971 11:44:32 -- scripts/common.sh@354 -- # echo 2 00:17:01.971 11:44:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:01.971 11:44:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:01.971 11:44:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:01.971 11:44:32 -- scripts/common.sh@367 -- # return 0 00:17:01.971 11:44:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.971 11:44:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:01.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.971 --rc genhtml_branch_coverage=1 00:17:01.971 --rc genhtml_function_coverage=1 00:17:01.971 --rc genhtml_legend=1 00:17:01.971 --rc geninfo_all_blocks=1 00:17:01.971 --rc geninfo_unexecuted_blocks=1 00:17:01.971 00:17:01.971 ' 00:17:01.971 11:44:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:01.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.971 --rc genhtml_branch_coverage=1 00:17:01.971 --rc genhtml_function_coverage=1 00:17:01.971 --rc genhtml_legend=1 00:17:01.971 --rc geninfo_all_blocks=1 00:17:01.971 --rc geninfo_unexecuted_blocks=1 00:17:01.971 00:17:01.971 ' 00:17:01.971 11:44:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:01.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.971 --rc genhtml_branch_coverage=1 00:17:01.972 --rc genhtml_function_coverage=1 00:17:01.972 --rc genhtml_legend=1 00:17:01.972 --rc geninfo_all_blocks=1 00:17:01.972 --rc geninfo_unexecuted_blocks=1 00:17:01.972 00:17:01.972 ' 00:17:01.972 11:44:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:01.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.972 --rc genhtml_branch_coverage=1 00:17:01.972 --rc genhtml_function_coverage=1 00:17:01.972 --rc genhtml_legend=1 00:17:01.972 --rc geninfo_all_blocks=1 00:17:01.972 --rc geninfo_unexecuted_blocks=1 00:17:01.972 00:17:01.972 ' 00:17:01.972 11:44:32 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.972 11:44:32 -- nvmf/common.sh@7 -- # uname -s 00:17:01.972 11:44:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.972 11:44:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.972 11:44:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.972 11:44:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.972 11:44:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.972 11:44:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.972 11:44:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.972 11:44:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.972 11:44:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.972 11:44:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.972 11:44:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:01.972 11:44:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:01.972 11:44:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.972 11:44:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.972 11:44:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.972 11:44:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:01.972 11:44:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.972 11:44:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.972 11:44:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.972 11:44:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.972 11:44:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.972 11:44:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.972 11:44:32 -- paths/export.sh@5 -- # export PATH 00:17:01.972 11:44:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.972 11:44:32 -- nvmf/common.sh@46 -- # : 0 00:17:01.972 11:44:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:01.972 11:44:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:01.972 11:44:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:01.972 11:44:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.972 11:44:32 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.972 11:44:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:01.972 11:44:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:01.972 11:44:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:01.972 11:44:32 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.972 11:44:32 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.972 11:44:32 -- target/nvme_cli.sh@14 -- # devs=() 00:17:01.972 11:44:32 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:01.972 11:44:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:01.972 11:44:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.972 11:44:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:01.972 11:44:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:01.972 11:44:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:01.972 11:44:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.972 11:44:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.972 11:44:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.972 11:44:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:01.972 11:44:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:01.972 11:44:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:01.972 11:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:08.536 11:44:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:08.536 11:44:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:08.536 11:44:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:08.536 11:44:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:08.536 11:44:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:08.536 11:44:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:08.536 11:44:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:08.536 11:44:39 -- nvmf/common.sh@294 -- # net_devs=() 00:17:08.536 11:44:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:08.536 11:44:39 -- nvmf/common.sh@295 -- # e810=() 00:17:08.537 11:44:39 -- nvmf/common.sh@295 -- # local -ga e810 00:17:08.537 11:44:39 -- nvmf/common.sh@296 -- # x722=() 00:17:08.537 11:44:39 -- nvmf/common.sh@296 -- # local -ga x722 00:17:08.537 11:44:39 -- nvmf/common.sh@297 -- # mlx=() 00:17:08.537 11:44:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:08.537 11:44:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.537 11:44:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:08.537 11:44:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:08.537 11:44:39 
-- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:08.537 11:44:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:08.537 11:44:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:08.537 11:44:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:08.537 11:44:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:08.537 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:08.537 11:44:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:08.537 11:44:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:08.537 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:08.537 11:44:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:08.537 11:44:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:08.537 11:44:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.537 11:44:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.537 11:44:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.537 11:44:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:08.537 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:08.537 11:44:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.537 11:44:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.537 11:44:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.537 11:44:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.537 11:44:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:08.537 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:08.537 11:44:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.537 11:44:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:08.537 11:44:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:08.537 11:44:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:08.537 11:44:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:08.537 11:44:39 -- nvmf/common.sh@57 -- # uname 00:17:08.537 11:44:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:08.537 
11:44:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:08.537 11:44:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:08.537 11:44:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:08.537 11:44:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:08.537 11:44:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:08.537 11:44:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:08.537 11:44:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:08.537 11:44:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:08.537 11:44:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:08.537 11:44:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:08.537 11:44:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:08.537 11:44:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:08.537 11:44:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:08.537 11:44:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:08.537 11:44:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:08.537 11:44:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:08.537 11:44:39 -- nvmf/common.sh@104 -- # continue 2 00:17:08.537 11:44:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.537 11:44:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:08.537 11:44:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:08.537 11:44:39 -- nvmf/common.sh@104 -- # continue 2 00:17:08.537 11:44:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:08.537 11:44:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:08.537 11:44:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:08.796 11:44:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:08.797 11:44:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:08.797 11:44:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:08.797 11:44:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:08.797 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:08.797 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:08.797 altname enp217s0f0np0 00:17:08.797 altname ens818f0np0 00:17:08.797 inet 192.168.100.8/24 scope global mlx_0_0 00:17:08.797 valid_lft forever preferred_lft forever 00:17:08.797 11:44:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:08.797 11:44:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:08.797 11:44:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:08.797 11:44:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:08.797 11:44:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:08.797 11:44:39 -- nvmf/common.sh@80 -- # ip addr show 
mlx_0_1 00:17:08.797 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:08.797 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:08.797 altname enp217s0f1np1 00:17:08.797 altname ens818f1np1 00:17:08.797 inet 192.168.100.9/24 scope global mlx_0_1 00:17:08.797 valid_lft forever preferred_lft forever 00:17:08.797 11:44:39 -- nvmf/common.sh@410 -- # return 0 00:17:08.797 11:44:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.797 11:44:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:08.797 11:44:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:08.797 11:44:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:08.797 11:44:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:08.797 11:44:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:08.797 11:44:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:08.797 11:44:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:08.797 11:44:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:08.797 11:44:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:08.797 11:44:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:08.797 11:44:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.797 11:44:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:08.797 11:44:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:08.797 11:44:39 -- nvmf/common.sh@104 -- # continue 2 00:17:08.797 11:44:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:08.797 11:44:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.797 11:44:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:08.797 11:44:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.797 11:44:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:08.797 11:44:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:08.797 11:44:39 -- nvmf/common.sh@104 -- # continue 2 00:17:08.797 11:44:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:08.797 11:44:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:08.797 11:44:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:08.797 11:44:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:08.797 11:44:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:08.797 11:44:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:08.797 11:44:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:08.797 11:44:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:08.797 192.168.100.9' 00:17:08.797 11:44:39 -- nvmf/common.sh@445 -- # head -n 1 00:17:08.797 11:44:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:08.797 192.168.100.9' 00:17:08.797 11:44:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:08.797 11:44:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:08.797 192.168.100.9' 00:17:08.797 11:44:39 -- nvmf/common.sh@446 -- # tail -n +2 00:17:08.797 11:44:39 -- nvmf/common.sh@446 -- # head -n 1 00:17:08.797 11:44:39 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:08.797 11:44:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:08.797 11:44:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:08.797 11:44:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:08.797 11:44:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:08.797 11:44:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:08.797 11:44:39 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:08.797 11:44:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.797 11:44:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.797 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.797 11:44:39 -- nvmf/common.sh@469 -- # nvmfpid=3728536 00:17:08.797 11:44:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:08.797 11:44:39 -- nvmf/common.sh@470 -- # waitforlisten 3728536 00:17:08.797 11:44:39 -- common/autotest_common.sh@829 -- # '[' -z 3728536 ']' 00:17:08.797 11:44:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.797 11:44:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.797 11:44:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.797 11:44:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.797 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.797 [2024-12-03 11:44:39.341762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:08.797 [2024-12-03 11:44:39.341809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.797 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.056 [2024-12-03 11:44:39.412470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.056 [2024-12-03 11:44:39.486817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.056 [2024-12-03 11:44:39.486925] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.056 [2024-12-03 11:44:39.486936] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.056 [2024-12-03 11:44:39.486945] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
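The nvmfappstart/waitforlisten pair traced above launches nvmf_tgt in the background and blocks until its RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the same build-tree layout; the real waitforlisten helper in autotest_common.sh does additional pid and retry bookkeeping:

  # Sketch only: start the target and poll its RPC UNIX socket (paths from this log).
  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc_sock=/var/tmp/spdk.sock
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Wait until the socket exists and the app reports its framework initialized.
  for ((i = 0; i < 100; i++)); do
    [[ -S $rpc_sock ]] && "$rootdir/scripts/rpc.py" -s "$rpc_sock" framework_wait_init && break
    sleep 0.1
  done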
00:17:09.056 [2024-12-03 11:44:39.486990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.056 [2024-12-03 11:44:39.487010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.056 [2024-12-03 11:44:39.487117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.056 [2024-12-03 11:44:39.487120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.625 11:44:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.625 11:44:40 -- common/autotest_common.sh@862 -- # return 0 00:17:09.625 11:44:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.625 11:44:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.625 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.625 11:44:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.625 11:44:40 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:09.625 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.625 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 [2024-12-03 11:44:40.244668] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e68090/0x1e6c580) succeed. 00:17:09.884 [2024-12-03 11:44:40.253889] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e69680/0x1eadc20) succeed. 00:17:09.884 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 11:44:40 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:09.884 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.884 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 Malloc0 00:17:09.884 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 11:44:40 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:09.884 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.884 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 Malloc1 00:17:09.884 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 11:44:40 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:09.884 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.884 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 11:44:40 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.884 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.884 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 11:44:40 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.884 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.884 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 11:44:40 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:09.884 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.884 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 [2024-12-03 
11:44:40.450327] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:09.884 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 11:44:40 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:09.884 11:44:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.884 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.884 11:44:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.884 11:44:40 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:10.144 00:17:10.144 Discovery Log Number of Records 2, Generation counter 2 00:17:10.144 =====Discovery Log Entry 0====== 00:17:10.144 trtype: rdma 00:17:10.144 adrfam: ipv4 00:17:10.144 subtype: current discovery subsystem 00:17:10.144 treq: not required 00:17:10.144 portid: 0 00:17:10.144 trsvcid: 4420 00:17:10.144 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:10.144 traddr: 192.168.100.8 00:17:10.144 eflags: explicit discovery connections, duplicate discovery information 00:17:10.144 rdma_prtype: not specified 00:17:10.144 rdma_qptype: connected 00:17:10.144 rdma_cms: rdma-cm 00:17:10.144 rdma_pkey: 0x0000 00:17:10.144 =====Discovery Log Entry 1====== 00:17:10.144 trtype: rdma 00:17:10.144 adrfam: ipv4 00:17:10.144 subtype: nvme subsystem 00:17:10.144 treq: not required 00:17:10.144 portid: 0 00:17:10.144 trsvcid: 4420 00:17:10.144 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:10.144 traddr: 192.168.100.8 00:17:10.144 eflags: none 00:17:10.144 rdma_prtype: not specified 00:17:10.144 rdma_qptype: connected 00:17:10.144 rdma_cms: rdma-cm 00:17:10.144 rdma_pkey: 0x0000 00:17:10.144 11:44:40 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:10.144 11:44:40 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:10.144 11:44:40 -- nvmf/common.sh@510 -- # local dev _ 00:17:10.144 11:44:40 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:10.144 11:44:40 -- nvmf/common.sh@509 -- # nvme list 00:17:10.144 11:44:40 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:10.144 11:44:40 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:10.144 11:44:40 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:10.144 11:44:40 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:10.144 11:44:40 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:10.144 11:44:40 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:11.081 11:44:41 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:11.081 11:44:41 -- common/autotest_common.sh@1187 -- # local i=0 00:17:11.081 11:44:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.081 11:44:41 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:17:11.081 11:44:41 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:17:11.081 11:44:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:12.988 11:44:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:12.988 11:44:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:12.988 11:44:43 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 
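The waitforserial helper traced above simply polls lsblk after 'nvme connect' until the expected number of namespaces carrying the target's serial show up. A minimal sketch of that loop, using the serial and device count from this run:

  # Sketch of waitforserial: both namespaces of cnode1 share the same serial string.
  serial=SPDKISFASTANDAWESOME
  expected=2
  for ((i = 0; i <= 15; i++)); do
    found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( found == expected )) && break   # e.g. /dev/nvme0n1 and /dev/nvme0n2 are visible
    sleep 2
  done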
00:17:12.988 11:44:43 -- common/autotest_common.sh@1196 -- # nvme_devices=2 00:17:12.988 11:44:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.988 11:44:43 -- common/autotest_common.sh@1197 -- # return 0 00:17:12.988 11:44:43 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:12.988 11:44:43 -- nvmf/common.sh@510 -- # local dev _ 00:17:12.988 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:12.988 11:44:43 -- nvmf/common.sh@509 -- # nvme list 00:17:13.248 11:44:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:13.248 11:44:43 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:13.248 11:44:43 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:13.248 /dev/nvme0n2 ]] 00:17:13.248 11:44:43 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:13.248 11:44:43 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:13.248 11:44:43 -- nvmf/common.sh@510 -- # local dev _ 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- nvmf/common.sh@509 -- # nvme list 00:17:13.248 11:44:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:13.248 11:44:43 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:13.248 11:44:43 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:13.248 11:44:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:13.248 11:44:43 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:13.248 11:44:43 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.186 11:44:44 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.186 11:44:44 -- common/autotest_common.sh@1208 -- # local i=0 00:17:14.186 11:44:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:14.186 11:44:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.186 11:44:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:14.186 11:44:44 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.186 11:44:44 -- common/autotest_common.sh@1220 -- # return 0 00:17:14.186 11:44:44 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:14.186 11:44:44 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.186 11:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.186 11:44:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.186 11:44:44 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.186 11:44:44 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:14.186 11:44:44 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:14.186 11:44:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:14.186 11:44:44 -- nvmf/common.sh@116 -- # sync 00:17:14.186 11:44:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:14.186 11:44:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:14.186 11:44:44 -- nvmf/common.sh@119 -- # set +e 00:17:14.186 11:44:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:14.186 11:44:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:14.186 rmmod nvme_rdma 00:17:14.186 rmmod nvme_fabrics 00:17:14.186 11:44:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:14.186 11:44:44 -- nvmf/common.sh@123 -- # set -e 00:17:14.186 11:44:44 -- nvmf/common.sh@124 -- # return 0 00:17:14.186 11:44:44 -- nvmf/common.sh@477 -- # '[' -n 3728536 ']' 00:17:14.186 11:44:44 -- nvmf/common.sh@478 -- # killprocess 3728536 00:17:14.186 11:44:44 -- common/autotest_common.sh@936 -- # '[' -z 3728536 ']' 00:17:14.186 11:44:44 -- common/autotest_common.sh@940 -- # kill -0 3728536 00:17:14.186 11:44:44 -- common/autotest_common.sh@941 -- # uname 00:17:14.186 11:44:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.186 11:44:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3728536 00:17:14.186 11:44:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:14.186 11:44:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:14.186 11:44:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3728536' 00:17:14.186 killing process with pid 3728536 00:17:14.186 11:44:44 -- common/autotest_common.sh@955 -- # kill 3728536 00:17:14.186 11:44:44 -- common/autotest_common.sh@960 -- # wait 3728536 00:17:14.755 11:44:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:14.755 11:44:45 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:14.755 00:17:14.755 real 0m12.804s 00:17:14.755 user 0m24.142s 00:17:14.755 sys 0m5.860s 00:17:14.755 11:44:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:14.755 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:14.755 ************************************ 00:17:14.755 END TEST nvmf_nvme_cli 00:17:14.755 ************************************ 00:17:14.755 11:44:45 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:14.755 11:44:45 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:14.755 11:44:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:14.755 11:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:14.755 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:14.755 ************************************ 00:17:14.755 START TEST nvmf_host_management 00:17:14.755 ************************************ 00:17:14.755 11:44:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:14.755 * Looking for test storage... 
00:17:14.755 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:14.755 11:44:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:14.755 11:44:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:14.755 11:44:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:14.755 11:44:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:14.755 11:44:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:14.755 11:44:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:14.755 11:44:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:14.755 11:44:45 -- scripts/common.sh@335 -- # IFS=.-: 00:17:14.755 11:44:45 -- scripts/common.sh@335 -- # read -ra ver1 00:17:14.755 11:44:45 -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.755 11:44:45 -- scripts/common.sh@336 -- # read -ra ver2 00:17:14.755 11:44:45 -- scripts/common.sh@337 -- # local 'op=<' 00:17:14.755 11:44:45 -- scripts/common.sh@339 -- # ver1_l=2 00:17:14.755 11:44:45 -- scripts/common.sh@340 -- # ver2_l=1 00:17:14.755 11:44:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:14.755 11:44:45 -- scripts/common.sh@343 -- # case "$op" in 00:17:14.755 11:44:45 -- scripts/common.sh@344 -- # : 1 00:17:14.755 11:44:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:14.755 11:44:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:14.755 11:44:45 -- scripts/common.sh@364 -- # decimal 1 00:17:14.755 11:44:45 -- scripts/common.sh@352 -- # local d=1 00:17:14.755 11:44:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.755 11:44:45 -- scripts/common.sh@354 -- # echo 1 00:17:14.755 11:44:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:14.755 11:44:45 -- scripts/common.sh@365 -- # decimal 2 00:17:14.755 11:44:45 -- scripts/common.sh@352 -- # local d=2 00:17:14.755 11:44:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.755 11:44:45 -- scripts/common.sh@354 -- # echo 2 00:17:14.755 11:44:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:14.755 11:44:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:14.755 11:44:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:14.755 11:44:45 -- scripts/common.sh@367 -- # return 0 00:17:14.755 11:44:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.755 11:44:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:14.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.755 --rc genhtml_branch_coverage=1 00:17:14.755 --rc genhtml_function_coverage=1 00:17:14.755 --rc genhtml_legend=1 00:17:14.755 --rc geninfo_all_blocks=1 00:17:14.755 --rc geninfo_unexecuted_blocks=1 00:17:14.755 00:17:14.755 ' 00:17:14.755 11:44:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:14.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.755 --rc genhtml_branch_coverage=1 00:17:14.755 --rc genhtml_function_coverage=1 00:17:14.755 --rc genhtml_legend=1 00:17:14.755 --rc geninfo_all_blocks=1 00:17:14.755 --rc geninfo_unexecuted_blocks=1 00:17:14.755 00:17:14.755 ' 00:17:14.755 11:44:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:14.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.755 --rc genhtml_branch_coverage=1 00:17:14.755 --rc genhtml_function_coverage=1 00:17:14.755 --rc genhtml_legend=1 00:17:14.755 --rc geninfo_all_blocks=1 00:17:14.755 --rc geninfo_unexecuted_blocks=1 00:17:14.755 00:17:14.755 ' 
00:17:14.755 11:44:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:14.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.755 --rc genhtml_branch_coverage=1 00:17:14.755 --rc genhtml_function_coverage=1 00:17:14.755 --rc genhtml_legend=1 00:17:14.755 --rc geninfo_all_blocks=1 00:17:14.755 --rc geninfo_unexecuted_blocks=1 00:17:14.755 00:17:14.755 ' 00:17:14.755 11:44:45 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.755 11:44:45 -- nvmf/common.sh@7 -- # uname -s 00:17:14.755 11:44:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.755 11:44:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.755 11:44:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.755 11:44:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.755 11:44:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.755 11:44:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.755 11:44:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.755 11:44:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.755 11:44:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.755 11:44:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.755 11:44:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:14.755 11:44:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:14.755 11:44:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.755 11:44:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.755 11:44:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.755 11:44:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:14.755 11:44:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.755 11:44:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.755 11:44:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.755 11:44:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.755 11:44:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.755 11:44:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.755 11:44:45 -- paths/export.sh@5 -- # export PATH 00:17:14.755 11:44:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.755 11:44:45 -- nvmf/common.sh@46 -- # : 0 00:17:14.755 11:44:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:14.755 11:44:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:14.755 11:44:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:14.755 11:44:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.755 11:44:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.755 11:44:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:14.755 11:44:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:14.755 11:44:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:14.755 11:44:45 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.755 11:44:45 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.755 11:44:45 -- target/host_management.sh@104 -- # nvmftestinit 00:17:14.755 11:44:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:14.755 11:44:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.755 11:44:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:14.755 11:44:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:14.755 11:44:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:14.755 11:44:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.755 11:44:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.755 11:44:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.755 11:44:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:14.755 11:44:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:14.755 11:44:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:14.755 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.325 11:44:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:21.325 11:44:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:21.325 11:44:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:21.325 11:44:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:21.325 11:44:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:21.325 11:44:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:21.325 11:44:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:21.325 11:44:51 -- nvmf/common.sh@294 -- # net_devs=() 00:17:21.325 11:44:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:21.325 
11:44:51 -- nvmf/common.sh@295 -- # e810=() 00:17:21.325 11:44:51 -- nvmf/common.sh@295 -- # local -ga e810 00:17:21.325 11:44:51 -- nvmf/common.sh@296 -- # x722=() 00:17:21.325 11:44:51 -- nvmf/common.sh@296 -- # local -ga x722 00:17:21.325 11:44:51 -- nvmf/common.sh@297 -- # mlx=() 00:17:21.325 11:44:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:21.325 11:44:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.325 11:44:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.325 11:44:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.325 11:44:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.325 11:44:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.326 11:44:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.326 11:44:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.326 11:44:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.326 11:44:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.326 11:44:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.326 11:44:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.326 11:44:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:21.326 11:44:51 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:21.326 11:44:51 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:21.326 11:44:51 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:21.326 11:44:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:21.326 11:44:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:21.326 11:44:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:21.326 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:21.326 11:44:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:21.326 11:44:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:21.326 11:44:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:21.326 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:21.326 11:44:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:21.326 11:44:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:21.326 11:44:51 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:21.326 11:44:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.326 11:44:51 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:21.326 11:44:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.326 11:44:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:21.326 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:21.326 11:44:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.326 11:44:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:21.326 11:44:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.326 11:44:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:21.326 11:44:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.326 11:44:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:21.326 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:21.326 11:44:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.326 11:44:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:21.326 11:44:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:21.326 11:44:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:21.326 11:44:51 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:21.326 11:44:51 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:21.326 11:44:51 -- nvmf/common.sh@57 -- # uname 00:17:21.587 11:44:51 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:21.587 11:44:51 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:21.587 11:44:51 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:21.587 11:44:51 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:21.587 11:44:51 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:21.587 11:44:51 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:21.587 11:44:51 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:21.587 11:44:51 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:21.587 11:44:51 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:21.587 11:44:51 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:21.587 11:44:51 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:21.587 11:44:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:21.587 11:44:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:21.587 11:44:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:21.587 11:44:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:21.587 11:44:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:21.587 11:44:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:21.587 11:44:52 -- nvmf/common.sh@104 -- # continue 2 00:17:21.587 11:44:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:21.587 11:44:52 -- nvmf/common.sh@104 -- # continue 2 00:17:21.587 11:44:52 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:21.587 11:44:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:21.587 11:44:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:21.587 11:44:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:21.587 11:44:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:21.587 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:21.587 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:21.587 altname enp217s0f0np0 00:17:21.587 altname ens818f0np0 00:17:21.587 inet 192.168.100.8/24 scope global mlx_0_0 00:17:21.587 valid_lft forever preferred_lft forever 00:17:21.587 11:44:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:21.587 11:44:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:21.587 11:44:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:21.587 11:44:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:21.587 11:44:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:21.587 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:21.587 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:21.587 altname enp217s0f1np1 00:17:21.587 altname ens818f1np1 00:17:21.587 inet 192.168.100.9/24 scope global mlx_0_1 00:17:21.587 valid_lft forever preferred_lft forever 00:17:21.587 11:44:52 -- nvmf/common.sh@410 -- # return 0 00:17:21.587 11:44:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:21.587 11:44:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:21.587 11:44:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:21.587 11:44:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:21.587 11:44:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:21.587 11:44:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:21.587 11:44:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:21.587 11:44:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:21.587 11:44:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:21.587 11:44:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:21.587 11:44:52 -- nvmf/common.sh@104 -- # continue 2 00:17:21.587 11:44:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.587 11:44:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:21.587 11:44:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 
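get_ip_address, traced above for both mlx ports, is just the first IPv4 address on an interface with the prefix length stripped; as a standalone helper it amounts to:

  # Sketch of get_ip_address as traced: `ip -o -4` prints one line per address,
  # field 4 is ADDR/PREFIX, and cut drops the prefix.
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
  get_ip_address mlx_0_1   # -> 192.168.100.9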
00:17:21.587 11:44:52 -- nvmf/common.sh@104 -- # continue 2 00:17:21.587 11:44:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:21.587 11:44:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:21.587 11:44:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:21.587 11:44:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:21.587 11:44:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:21.587 11:44:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:21.587 11:44:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:21.587 11:44:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:21.587 192.168.100.9' 00:17:21.587 11:44:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:21.587 192.168.100.9' 00:17:21.587 11:44:52 -- nvmf/common.sh@445 -- # head -n 1 00:17:21.587 11:44:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:21.587 11:44:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:21.587 192.168.100.9' 00:17:21.587 11:44:52 -- nvmf/common.sh@446 -- # tail -n +2 00:17:21.587 11:44:52 -- nvmf/common.sh@446 -- # head -n 1 00:17:21.587 11:44:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:21.587 11:44:52 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:21.587 11:44:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:21.587 11:44:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:21.587 11:44:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:21.587 11:44:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:21.587 11:44:52 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:21.587 11:44:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:21.587 11:44:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.587 11:44:52 -- common/autotest_common.sh@10 -- # set +x 00:17:21.587 ************************************ 00:17:21.587 START TEST nvmf_host_management 00:17:21.588 ************************************ 00:17:21.588 11:44:52 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:21.588 11:44:52 -- target/host_management.sh@69 -- # starttarget 00:17:21.588 11:44:52 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:21.588 11:44:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:21.588 11:44:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:21.588 11:44:52 -- common/autotest_common.sh@10 -- # set +x 00:17:21.588 11:44:52 -- nvmf/common.sh@469 -- # nvmfpid=3732847 00:17:21.588 11:44:52 -- nvmf/common.sh@470 -- # waitforlisten 3732847 00:17:21.588 11:44:52 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:21.588 11:44:52 -- common/autotest_common.sh@829 -- # '[' -z 3732847 ']' 00:17:21.588 11:44:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.588 11:44:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.588 11:44:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:21.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.588 11:44:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.588 11:44:52 -- common/autotest_common.sh@10 -- # set +x 00:17:21.847 [2024-12-03 11:44:52.239081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:21.847 [2024-12-03 11:44:52.239154] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.847 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.847 [2024-12-03 11:44:52.312186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.848 [2024-12-03 11:44:52.386294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.848 [2024-12-03 11:44:52.386401] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.848 [2024-12-03 11:44:52.386410] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.848 [2024-12-03 11:44:52.386419] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.848 [2024-12-03 11:44:52.386467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.848 [2024-12-03 11:44:52.386564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.848 [2024-12-03 11:44:52.386983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.848 [2024-12-03 11:44:52.386983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:22.787 11:44:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.787 11:44:53 -- common/autotest_common.sh@862 -- # return 0 00:17:22.787 11:44:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.787 11:44:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.787 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.787 11:44:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.787 11:44:53 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:22.787 11:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.787 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.787 [2024-12-03 11:44:53.145972] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1282380/0x1286870) succeed. 00:17:22.787 [2024-12-03 11:44:53.155147] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1283970/0x12c7f10) succeed. 
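The rpc_cmd calls in this section talk to the target over /var/tmp/spdk.sock. Assuming rpc_cmd is a thin wrapper around scripts/rpc.py, and that the rpcs.txt generated for host_management mirrors the nvme_cli bring-up earlier in this log (the exact file contents are not shown here), the target setup can be sketched as:

  # Sketch only; subsystem name cnode0 is inferred from the bdevperf config below.
  rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420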
00:17:22.787 11:44:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.787 11:44:53 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:22.787 11:44:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.787 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.787 11:44:53 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:22.787 11:44:53 -- target/host_management.sh@23 -- # cat 00:17:22.787 11:44:53 -- target/host_management.sh@30 -- # rpc_cmd 00:17:22.787 11:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.787 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.787 Malloc0 00:17:22.787 [2024-12-03 11:44:53.333298] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:22.787 11:44:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.787 11:44:53 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:22.787 11:44:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.787 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.787 11:44:53 -- target/host_management.sh@73 -- # perfpid=3733152 00:17:22.787 11:44:53 -- target/host_management.sh@74 -- # waitforlisten 3733152 /var/tmp/bdevperf.sock 00:17:22.787 11:44:53 -- common/autotest_common.sh@829 -- # '[' -z 3733152 ']' 00:17:22.787 11:44:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.787 11:44:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.787 11:44:53 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:22.787 11:44:53 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:22.787 11:44:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.787 11:44:53 -- nvmf/common.sh@520 -- # config=() 00:17:22.787 11:44:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.787 11:44:53 -- nvmf/common.sh@520 -- # local subsystem config 00:17:22.787 11:44:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.787 11:44:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:22.787 11:44:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:22.787 { 00:17:22.787 "params": { 00:17:22.787 "name": "Nvme$subsystem", 00:17:22.787 "trtype": "$TEST_TRANSPORT", 00:17:22.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.787 "adrfam": "ipv4", 00:17:22.787 "trsvcid": "$NVMF_PORT", 00:17:22.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.787 "hdgst": ${hdgst:-false}, 00:17:22.787 "ddgst": ${ddgst:-false} 00:17:22.787 }, 00:17:22.787 "method": "bdev_nvme_attach_controller" 00:17:22.787 } 00:17:22.787 EOF 00:17:22.787 )") 00:17:22.787 11:44:53 -- nvmf/common.sh@542 -- # cat 00:17:23.047 11:44:53 -- nvmf/common.sh@544 -- # jq . 
00:17:23.047 11:44:53 -- nvmf/common.sh@545 -- # IFS=, 00:17:23.047 11:44:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:23.047 "params": { 00:17:23.047 "name": "Nvme0", 00:17:23.047 "trtype": "rdma", 00:17:23.047 "traddr": "192.168.100.8", 00:17:23.047 "adrfam": "ipv4", 00:17:23.047 "trsvcid": "4420", 00:17:23.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:23.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:23.047 "hdgst": false, 00:17:23.047 "ddgst": false 00:17:23.047 }, 00:17:23.047 "method": "bdev_nvme_attach_controller" 00:17:23.047 }' 00:17:23.047 [2024-12-03 11:44:53.431628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:23.047 [2024-12-03 11:44:53.431679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733152 ] 00:17:23.047 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.047 [2024-12-03 11:44:53.502190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.047 [2024-12-03 11:44:53.570134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.306 Running I/O for 10 seconds... 00:17:23.876 11:44:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.876 11:44:54 -- common/autotest_common.sh@862 -- # return 0 00:17:23.876 11:44:54 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:23.876 11:44:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.876 11:44:54 -- common/autotest_common.sh@10 -- # set +x 00:17:23.876 11:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.876 11:44:54 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:23.876 11:44:54 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:23.876 11:44:54 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:23.876 11:44:54 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:23.876 11:44:54 -- target/host_management.sh@52 -- # local ret=1 00:17:23.876 11:44:54 -- target/host_management.sh@53 -- # local i 00:17:23.876 11:44:54 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:23.876 11:44:54 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:23.876 11:44:54 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:23.876 11:44:54 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:23.876 11:44:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.876 11:44:54 -- common/autotest_common.sh@10 -- # set +x 00:17:23.876 11:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.876 11:44:54 -- target/host_management.sh@55 -- # read_io_count=3046 00:17:23.876 11:44:54 -- target/host_management.sh@58 -- # '[' 3046 -ge 100 ']' 00:17:23.876 11:44:54 -- target/host_management.sh@59 -- # ret=0 00:17:23.876 11:44:54 -- target/host_management.sh@60 -- # break 00:17:23.876 11:44:54 -- target/host_management.sh@64 -- # return 0 00:17:23.876 11:44:54 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:23.876 11:44:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.876 11:44:54 -- common/autotest_common.sh@10 -- # set +x 00:17:23.876 11:44:54 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.876 11:44:54 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:23.876 11:44:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.876 11:44:54 -- common/autotest_common.sh@10 -- # set +x 00:17:23.876 11:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.876 11:44:54 -- target/host_management.sh@87 -- # sleep 1 00:17:24.815 [2024-12-03 11:44:55.332142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:17:24.815 [2024-12-03 11:44:55.332177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:17:24.815 [2024-12-03 11:44:55.332205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:17:24.815 [2024-12-03 11:44:55.332225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:17:24.815 [2024-12-03 11:44:55.332245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:17:24.815 [2024-12-03 11:44:55.332265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:17:24.815 [2024-12-03 11:44:55.332284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182700 00:17:24.815 [2024-12-03 11:44:55.332309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:24.815 [2024-12-03 11:44:55.332328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182700 00:17:24.815 [2024-12-03 11:44:55.332347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:17:24.815 [2024-12-03 11:44:55.332367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182500 00:17:24.815 [2024-12-03 11:44:55.332386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:17:24.815 [2024-12-03 11:44:55.332405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:17:24.815 [2024-12-03 11:44:55.332424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:17:24.815 [2024-12-03 11:44:55.332447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:17:24.815 [2024-12-03 11:44:55.332466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:17:24.815 [2024-12-03 11:44:55.332486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:17:24.815 [2024-12-03 11:44:55.332506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 
[2024-12-03 11:44:55.332517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:17:24.815 [2024-12-03 11:44:55.332527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182000 00:17:24.815 [2024-12-03 11:44:55.332547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:17:24.815 [2024-12-03 11:44:55.332567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:17:24.815 [2024-12-03 11:44:55.332587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:17:24.815 [2024-12-03 11:44:55.332607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182700 00:17:24.815 [2024-12-03 11:44:55.332626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000136d7000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbbc000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbdd000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbfe000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb17000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x182300 00:17:24.815 [2024-12-03 11:44:55.332806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.815 [2024-12-03 11:44:55.332816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.332837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.332858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.332878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.332898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.332917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.332937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c66f000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.332957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5a9000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.332981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x182300 00:17:24.816 [2024-12-03 11:44:55.332990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:17:24.816 [2024-12-03 11:44:55.333009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:17:24.816 [2024-12-03 11:44:55.333028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:17:24.816 [2024-12-03 11:44:55.333048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:17:24.816 [2024-12-03 11:44:55.333067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:17:24.816 [2024-12-03 11:44:55.333088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:17:24.816 [2024-12-03 11:44:55.333107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:17:24.816 [2024-12-03 11:44:55.333130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:17:24.816 [2024-12-03 11:44:55.333150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182400 00:17:24.816 [2024-12-03 11:44:55.333169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:17:24.816 [2024-12-03 11:44:55.333188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:17:24.816 [2024-12-03 11:44:55.333210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:17:24.816 [2024-12-03 11:44:55.333229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 
00:17:24.816 [2024-12-03 11:44:55.333247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182600 00:17:24.816 [2024-12-03 11:44:55.333266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:17:24.816 [2024-12-03 11:44:55.333286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:17:24.816 [2024-12-03 11:44:55.333305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182600 00:17:24.816 [2024-12-03 11:44:55.333324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:17:24.816 [2024-12-03 11:44:55.333343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:17:24.816 [2024-12-03 11:44:55.333362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:17:24.816 [2024-12-03 11:44:55.333381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182000 00:17:24.816 [2024-12-03 11:44:55.333402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:17:24.816 [2024-12-03 11:44:55.333422] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.333433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:17:24.816 [2024-12-03 11:44:55.333442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:95aa000 sqhd:5310 p:0 m:0 dnr:0 00:17:24.816 [2024-12-03 11:44:55.335479] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 00:17:24.816 [2024-12-03 11:44:55.336362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:24.816 task offset: 21376 on job bdev=Nvme0n1 fails 00:17:24.816 00:17:24.816 Latency(us) 00:17:24.816 [2024-12-03T10:44:55.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.816 [2024-12-03T10:44:55.430Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:24.816 [2024-12-03T10:44:55.430Z] Job: Nvme0n1 ended in about 1.59 seconds with error 00:17:24.816 Verification LBA range: start 0x0 length 0x400 00:17:24.816 Nvme0n1 : 1.59 2042.53 127.66 40.31 0.00 30526.92 2739.40 1013343.85 00:17:24.816 [2024-12-03T10:44:55.430Z] =================================================================================================================== 00:17:24.816 [2024-12-03T10:44:55.430Z] Total : 2042.53 127.66 40.31 0.00 30526.92 2739.40 1013343.85 00:17:24.816 [2024-12-03 11:44:55.338029] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:24.816 11:44:55 -- target/host_management.sh@91 -- # kill -9 3733152 00:17:24.817 11:44:55 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:24.817 11:44:55 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:24.817 11:44:55 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:24.817 11:44:55 -- nvmf/common.sh@520 -- # config=() 00:17:24.817 11:44:55 -- nvmf/common.sh@520 -- # local subsystem config 00:17:24.817 11:44:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:24.817 11:44:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:24.817 { 00:17:24.817 "params": { 00:17:24.817 "name": "Nvme$subsystem", 00:17:24.817 "trtype": "$TEST_TRANSPORT", 00:17:24.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.817 "adrfam": "ipv4", 00:17:24.817 "trsvcid": "$NVMF_PORT", 00:17:24.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.817 "hdgst": ${hdgst:-false}, 00:17:24.817 "ddgst": ${ddgst:-false} 00:17:24.817 }, 00:17:24.817 "method": "bdev_nvme_attach_controller" 00:17:24.817 } 00:17:24.817 EOF 00:17:24.817 )") 00:17:24.817 11:44:55 -- nvmf/common.sh@542 -- # cat 00:17:24.817 11:44:55 -- nvmf/common.sh@544 -- # jq . 
00:17:24.817 11:44:55 -- nvmf/common.sh@545 -- # IFS=, 00:17:24.817 11:44:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:24.817 "params": { 00:17:24.817 "name": "Nvme0", 00:17:24.817 "trtype": "rdma", 00:17:24.817 "traddr": "192.168.100.8", 00:17:24.817 "adrfam": "ipv4", 00:17:24.817 "trsvcid": "4420", 00:17:24.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:24.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:24.817 "hdgst": false, 00:17:24.817 "ddgst": false 00:17:24.817 }, 00:17:24.817 "method": "bdev_nvme_attach_controller" 00:17:24.817 }' 00:17:24.817 [2024-12-03 11:44:55.389397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:24.817 [2024-12-03 11:44:55.389448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733443 ] 00:17:24.817 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.075 [2024-12-03 11:44:55.458810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.075 [2024-12-03 11:44:55.526343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.335 Running I/O for 1 seconds... 00:17:26.273 00:17:26.273 Latency(us) 00:17:26.273 [2024-12-03T10:44:56.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.273 [2024-12-03T10:44:56.887Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:26.273 Verification LBA range: start 0x0 length 0x400 00:17:26.273 Nvme0n1 : 1.00 5595.15 349.70 0.00 0.00 11264.78 550.50 24956.11 00:17:26.273 [2024-12-03T10:44:56.887Z] =================================================================================================================== 00:17:26.273 [2024-12-03T10:44:56.887Z] Total : 5595.15 349.70 0.00 0.00 11264.78 550.50 24956.11 00:17:26.532 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3733152 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:26.532 11:44:56 -- target/host_management.sh@101 -- # stoptarget 00:17:26.532 11:44:56 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:26.532 11:44:56 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:26.532 11:44:56 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:26.532 11:44:56 -- target/host_management.sh@40 -- # nvmftestfini 00:17:26.532 11:44:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:26.532 11:44:56 -- nvmf/common.sh@116 -- # sync 00:17:26.532 11:44:56 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:26.532 11:44:56 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:26.532 11:44:56 -- nvmf/common.sh@119 -- # set +e 00:17:26.532 11:44:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:26.532 11:44:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:26.532 rmmod nvme_rdma 00:17:26.532 rmmod nvme_fabrics 00:17:26.532 11:44:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:26.532 11:44:56 -- nvmf/common.sh@123 -- # set -e 00:17:26.532 11:44:56 -- nvmf/common.sh@124 -- # return 0 00:17:26.532 11:44:56 -- nvmf/common.sh@477 -- # '[' -n 3732847 ']' 00:17:26.532 11:44:56 -- nvmf/common.sh@478 -- # killprocess 3732847 00:17:26.532 
11:44:56 -- common/autotest_common.sh@936 -- # '[' -z 3732847 ']' 00:17:26.532 11:44:56 -- common/autotest_common.sh@940 -- # kill -0 3732847 00:17:26.532 11:44:56 -- common/autotest_common.sh@941 -- # uname 00:17:26.532 11:44:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.532 11:44:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3732847 00:17:26.532 11:44:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:26.533 11:44:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:26.533 11:44:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3732847' 00:17:26.533 killing process with pid 3732847 00:17:26.533 11:44:57 -- common/autotest_common.sh@955 -- # kill 3732847 00:17:26.533 11:44:57 -- common/autotest_common.sh@960 -- # wait 3732847 00:17:26.792 [2024-12-03 11:44:57.330277] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:26.792 11:44:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:26.793 11:44:57 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:26.793 00:17:26.793 real 0m5.166s 00:17:26.793 user 0m23.077s 00:17:26.793 sys 0m1.027s 00:17:26.793 11:44:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:26.793 11:44:57 -- common/autotest_common.sh@10 -- # set +x 00:17:26.793 ************************************ 00:17:26.793 END TEST nvmf_host_management 00:17:26.793 ************************************ 00:17:26.793 11:44:57 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:26.793 00:17:26.793 real 0m12.276s 00:17:26.793 user 0m25.124s 00:17:26.793 sys 0m6.329s 00:17:26.793 11:44:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:26.793 11:44:57 -- common/autotest_common.sh@10 -- # set +x 00:17:26.793 ************************************ 00:17:26.793 END TEST nvmf_host_management 00:17:26.793 ************************************ 00:17:27.052 11:44:57 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:27.052 11:44:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:27.052 11:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:27.052 11:44:57 -- common/autotest_common.sh@10 -- # set +x 00:17:27.052 ************************************ 00:17:27.052 START TEST nvmf_lvol 00:17:27.052 ************************************ 00:17:27.052 11:44:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:27.052 * Looking for test storage... 
00:17:27.053 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:27.053 11:44:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:27.053 11:44:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:27.053 11:44:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:27.053 11:44:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:27.053 11:44:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:27.053 11:44:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:27.053 11:44:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:27.053 11:44:57 -- scripts/common.sh@335 -- # IFS=.-: 00:17:27.053 11:44:57 -- scripts/common.sh@335 -- # read -ra ver1 00:17:27.053 11:44:57 -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.053 11:44:57 -- scripts/common.sh@336 -- # read -ra ver2 00:17:27.053 11:44:57 -- scripts/common.sh@337 -- # local 'op=<' 00:17:27.053 11:44:57 -- scripts/common.sh@339 -- # ver1_l=2 00:17:27.053 11:44:57 -- scripts/common.sh@340 -- # ver2_l=1 00:17:27.053 11:44:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:27.053 11:44:57 -- scripts/common.sh@343 -- # case "$op" in 00:17:27.053 11:44:57 -- scripts/common.sh@344 -- # : 1 00:17:27.053 11:44:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:27.053 11:44:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:27.053 11:44:57 -- scripts/common.sh@364 -- # decimal 1 00:17:27.053 11:44:57 -- scripts/common.sh@352 -- # local d=1 00:17:27.053 11:44:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.053 11:44:57 -- scripts/common.sh@354 -- # echo 1 00:17:27.053 11:44:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:27.053 11:44:57 -- scripts/common.sh@365 -- # decimal 2 00:17:27.053 11:44:57 -- scripts/common.sh@352 -- # local d=2 00:17:27.053 11:44:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.053 11:44:57 -- scripts/common.sh@354 -- # echo 2 00:17:27.053 11:44:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:27.053 11:44:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:27.053 11:44:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:27.053 11:44:57 -- scripts/common.sh@367 -- # return 0 00:17:27.053 11:44:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.053 11:44:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.053 --rc genhtml_branch_coverage=1 00:17:27.053 --rc genhtml_function_coverage=1 00:17:27.053 --rc genhtml_legend=1 00:17:27.053 --rc geninfo_all_blocks=1 00:17:27.053 --rc geninfo_unexecuted_blocks=1 00:17:27.053 00:17:27.053 ' 00:17:27.053 11:44:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.053 --rc genhtml_branch_coverage=1 00:17:27.053 --rc genhtml_function_coverage=1 00:17:27.053 --rc genhtml_legend=1 00:17:27.053 --rc geninfo_all_blocks=1 00:17:27.053 --rc geninfo_unexecuted_blocks=1 00:17:27.053 00:17:27.053 ' 00:17:27.053 11:44:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.053 --rc genhtml_branch_coverage=1 00:17:27.053 --rc genhtml_function_coverage=1 00:17:27.053 --rc genhtml_legend=1 00:17:27.053 --rc geninfo_all_blocks=1 00:17:27.053 --rc geninfo_unexecuted_blocks=1 00:17:27.053 00:17:27.053 ' 
00:17:27.053 11:44:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.053 --rc genhtml_branch_coverage=1 00:17:27.053 --rc genhtml_function_coverage=1 00:17:27.053 --rc genhtml_legend=1 00:17:27.053 --rc geninfo_all_blocks=1 00:17:27.053 --rc geninfo_unexecuted_blocks=1 00:17:27.053 00:17:27.053 ' 00:17:27.053 11:44:57 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.053 11:44:57 -- nvmf/common.sh@7 -- # uname -s 00:17:27.053 11:44:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.053 11:44:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.053 11:44:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.053 11:44:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.053 11:44:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.053 11:44:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.053 11:44:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.053 11:44:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.053 11:44:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.053 11:44:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.053 11:44:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:27.053 11:44:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:27.053 11:44:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.053 11:44:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.053 11:44:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.053 11:44:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:27.053 11:44:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.053 11:44:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.053 11:44:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.053 11:44:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.053 11:44:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.053 11:44:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.053 11:44:57 -- paths/export.sh@5 -- # export PATH 00:17:27.053 11:44:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.053 11:44:57 -- nvmf/common.sh@46 -- # : 0 00:17:27.053 11:44:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:27.053 11:44:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:27.053 11:44:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:27.053 11:44:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.053 11:44:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.053 11:44:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:27.054 11:44:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:27.054 11:44:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:27.054 11:44:57 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.054 11:44:57 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.054 11:44:57 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:27.054 11:44:57 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:27.054 11:44:57 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:27.054 11:44:57 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:27.054 11:44:57 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:27.054 11:44:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.054 11:44:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:27.054 11:44:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:27.054 11:44:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:27.054 11:44:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.054 11:44:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.054 11:44:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.313 11:44:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:27.313 11:44:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:27.313 11:44:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:27.313 11:44:57 -- common/autotest_common.sh@10 -- # set +x 00:17:33.901 11:45:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:33.901 11:45:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:33.901 11:45:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:33.901 11:45:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:33.901 11:45:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:33.901 11:45:04 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:17:33.901 11:45:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:33.901 11:45:04 -- nvmf/common.sh@294 -- # net_devs=() 00:17:33.901 11:45:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:33.901 11:45:04 -- nvmf/common.sh@295 -- # e810=() 00:17:33.901 11:45:04 -- nvmf/common.sh@295 -- # local -ga e810 00:17:33.901 11:45:04 -- nvmf/common.sh@296 -- # x722=() 00:17:33.901 11:45:04 -- nvmf/common.sh@296 -- # local -ga x722 00:17:33.901 11:45:04 -- nvmf/common.sh@297 -- # mlx=() 00:17:33.901 11:45:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:33.901 11:45:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.901 11:45:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:33.901 11:45:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:33.901 11:45:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:33.901 11:45:04 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:33.901 11:45:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:33.901 11:45:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:33.901 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:33.901 11:45:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.901 11:45:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:33.901 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:33.901 11:45:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.901 11:45:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:33.901 11:45:04 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.901 11:45:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:33.901 11:45:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.901 11:45:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:33.901 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:33.901 11:45:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.901 11:45:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.901 11:45:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:33.901 11:45:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.901 11:45:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:33.901 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:33.901 11:45:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.901 11:45:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:33.901 11:45:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:33.901 11:45:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:33.901 11:45:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:33.901 11:45:04 -- nvmf/common.sh@57 -- # uname 00:17:33.901 11:45:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:33.901 11:45:04 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:33.901 11:45:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:33.901 11:45:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:33.901 11:45:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:33.901 11:45:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:33.901 11:45:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:33.901 11:45:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:33.901 11:45:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:33.901 11:45:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:33.901 11:45:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:33.901 11:45:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.901 11:45:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:33.901 11:45:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:33.901 11:45:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.901 11:45:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:33.901 11:45:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:33.901 11:45:04 -- nvmf/common.sh@104 -- # continue 2 00:17:33.901 11:45:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:33.901 11:45:04 -- nvmf/common.sh@104 -- # continue 2 00:17:33.901 11:45:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:33.901 11:45:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:33.901 11:45:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:33.901 11:45:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.901 11:45:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:33.901 11:45:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.901 11:45:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:33.901 11:45:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:33.901 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.901 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:33.901 altname enp217s0f0np0 00:17:33.901 altname ens818f0np0 00:17:33.901 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.901 valid_lft forever preferred_lft forever 00:17:33.901 11:45:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:33.901 11:45:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:33.901 11:45:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:33.901 11:45:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:33.901 11:45:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.901 11:45:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.901 11:45:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:33.901 11:45:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:33.901 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.901 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:33.901 altname enp217s0f1np1 00:17:33.901 altname ens818f1np1 00:17:33.901 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.901 valid_lft forever preferred_lft forever 00:17:33.901 11:45:04 -- nvmf/common.sh@410 -- # return 0 00:17:33.901 11:45:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:33.901 11:45:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.901 11:45:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:33.901 11:45:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:33.901 11:45:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.901 11:45:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:33.901 11:45:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:33.901 11:45:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.901 11:45:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:33.901 11:45:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:33.901 11:45:04 -- nvmf/common.sh@104 -- # continue 2 00:17:33.901 11:45:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.901 11:45:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.901 11:45:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:33.901 11:45:04 -- nvmf/common.sh@104 -- # continue 2 00:17:33.902 11:45:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:33.902 11:45:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:33.902 11:45:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:33.902 11:45:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:33.902 11:45:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.902 11:45:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.902 11:45:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:33.902 11:45:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:33.902 11:45:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:33.902 11:45:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.902 11:45:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:33.902 11:45:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.902 11:45:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:33.902 192.168.100.9' 00:17:33.902 11:45:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:33.902 192.168.100.9' 00:17:33.902 11:45:04 -- nvmf/common.sh@445 -- # head -n 1 00:17:33.902 11:45:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.902 11:45:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:33.902 192.168.100.9' 00:17:33.902 11:45:04 -- nvmf/common.sh@446 -- # tail -n +2 00:17:33.902 11:45:04 -- nvmf/common.sh@446 -- # head -n 1 00:17:33.902 11:45:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.902 11:45:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:33.902 11:45:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.902 11:45:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:33.902 11:45:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:33.902 11:45:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:34.188 11:45:04 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:34.188 11:45:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:34.188 11:45:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.188 11:45:04 -- common/autotest_common.sh@10 -- # set +x 00:17:34.188 11:45:04 -- nvmf/common.sh@469 -- # nvmfpid=3737472 00:17:34.188 11:45:04 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:34.188 11:45:04 -- nvmf/common.sh@470 -- # waitforlisten 3737472 00:17:34.188 11:45:04 -- common/autotest_common.sh@829 -- # '[' -z 3737472 ']' 00:17:34.188 11:45:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.188 11:45:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.188 11:45:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.188 11:45:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.188 11:45:04 -- common/autotest_common.sh@10 -- # set +x 00:17:34.188 [2024-12-03 11:45:04.575762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:34.188 [2024-12-03 11:45:04.575811] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.188 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.188 [2024-12-03 11:45:04.643821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:34.188 [2024-12-03 11:45:04.712242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:34.188 [2024-12-03 11:45:04.712370] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.188 [2024-12-03 11:45:04.712381] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.188 [2024-12-03 11:45:04.712390] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.188 [2024-12-03 11:45:04.712450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.189 [2024-12-03 11:45:04.712548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.189 [2024-12-03 11:45:04.712550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.122 11:45:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.122 11:45:05 -- common/autotest_common.sh@862 -- # return 0 00:17:35.122 11:45:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:35.122 11:45:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.122 11:45:05 -- common/autotest_common.sh@10 -- # set +x 00:17:35.122 11:45:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.122 11:45:05 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:35.122 [2024-12-03 11:45:05.645866] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe2c560/0xe30a50) succeed. 00:17:35.122 [2024-12-03 11:45:05.655002] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe2dab0/0xe720f0) succeed. 
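For reference, the target bring-up traced above reduces to the following sketch (commands and arguments are copied from this log; SPDK_DIR is an assumed shorthand for /var/jenkins/workspace/nvmf-phy-autotest/spdk, and the RPC call is only issued once nvmf_tgt is listening on /var/tmp/spdk.sock):

    # start the NVMe-oF target on cores 0-2 with all trace groups enabled, as in this run
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    # create the RDMA transport with the options used by nvmf/common.sh here
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192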
00:17:35.380 11:45:05 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:35.380 11:45:05 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:35.380 11:45:05 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:35.639 11:45:06 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:35.639 11:45:06 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:35.898 11:45:06 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:36.157 11:45:06 -- target/nvmf_lvol.sh@29 -- # lvs=673ff590-8d27-4c8a-a4bc-b71badae1a08 00:17:36.157 11:45:06 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 673ff590-8d27-4c8a-a4bc-b71badae1a08 lvol 20 00:17:36.157 11:45:06 -- target/nvmf_lvol.sh@32 -- # lvol=c30824cb-3e14-4076-821a-a55f031761f0 00:17:36.157 11:45:06 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:36.416 11:45:06 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c30824cb-3e14-4076-821a-a55f031761f0 00:17:36.675 11:45:07 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:36.675 [2024-12-03 11:45:07.277391] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:36.933 11:45:07 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:36.933 11:45:07 -- target/nvmf_lvol.sh@42 -- # perf_pid=3738300 00:17:36.933 11:45:07 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:36.933 11:45:07 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:36.933 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.310 11:45:08 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c30824cb-3e14-4076-821a-a55f031761f0 MY_SNAPSHOT 00:17:38.310 11:45:08 -- target/nvmf_lvol.sh@47 -- # snapshot=41af9906-bce6-43c5-b668-93051320be3a 00:17:38.310 11:45:08 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c30824cb-3e14-4076-821a-a55f031761f0 30 00:17:38.310 11:45:08 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 41af9906-bce6-43c5-b668-93051320be3a MY_CLONE 00:17:38.569 11:45:09 -- target/nvmf_lvol.sh@49 -- # clone=9a16aa71-b054-43dc-bba6-289751071f59 00:17:38.569 11:45:09 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9a16aa71-b054-43dc-bba6-289751071f59 00:17:38.828 11:45:09 -- target/nvmf_lvol.sh@53 -- # wait 3738300 00:17:48.801 Initializing NVMe Controllers 00:17:48.801 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:17:48.801 Controller IO queue size 128, less than required. 00:17:48.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:48.801 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:48.801 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:48.801 Initialization complete. Launching workers. 00:17:48.801 ======================================================== 00:17:48.801 Latency(us) 00:17:48.801 Device Information : IOPS MiB/s Average min max 00:17:48.801 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17239.90 67.34 7426.66 2477.74 35904.68 00:17:48.801 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17088.30 66.75 7491.99 3498.53 33455.93 00:17:48.801 ======================================================== 00:17:48.801 Total : 34328.19 134.09 7459.18 2477.74 35904.68 00:17:48.801 00:17:48.801 11:45:18 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:48.801 11:45:19 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c30824cb-3e14-4076-821a-a55f031761f0 00:17:48.801 11:45:19 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 673ff590-8d27-4c8a-a4bc-b71badae1a08 00:17:49.059 11:45:19 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:49.059 11:45:19 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:49.059 11:45:19 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:49.059 11:45:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:49.059 11:45:19 -- nvmf/common.sh@116 -- # sync 00:17:49.059 11:45:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:49.059 11:45:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:49.059 11:45:19 -- nvmf/common.sh@119 -- # set +e 00:17:49.059 11:45:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:49.059 11:45:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:49.059 rmmod nvme_rdma 00:17:49.059 rmmod nvme_fabrics 00:17:49.059 11:45:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:49.059 11:45:19 -- nvmf/common.sh@123 -- # set -e 00:17:49.059 11:45:19 -- nvmf/common.sh@124 -- # return 0 00:17:49.059 11:45:19 -- nvmf/common.sh@477 -- # '[' -n 3737472 ']' 00:17:49.059 11:45:19 -- nvmf/common.sh@478 -- # killprocess 3737472 00:17:49.059 11:45:19 -- common/autotest_common.sh@936 -- # '[' -z 3737472 ']' 00:17:49.059 11:45:19 -- common/autotest_common.sh@940 -- # kill -0 3737472 00:17:49.059 11:45:19 -- common/autotest_common.sh@941 -- # uname 00:17:49.059 11:45:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.059 11:45:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3737472 00:17:49.059 11:45:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:49.059 11:45:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:49.059 11:45:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3737472' 00:17:49.059 killing process with pid 3737472 00:17:49.059 11:45:19 -- common/autotest_common.sh@955 -- # kill 3737472 00:17:49.059 11:45:19 -- common/autotest_common.sh@960 -- # wait 3737472 00:17:49.317 11:45:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:49.317 11:45:19 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:49.317 00:17:49.317 real 0m22.422s 00:17:49.317 user 1m11.951s 00:17:49.317 sys 0m6.478s 00:17:49.317 11:45:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:49.317 11:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:49.317 ************************************ 00:17:49.317 END TEST nvmf_lvol 00:17:49.317 ************************************ 00:17:49.317 11:45:19 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:49.317 11:45:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:49.317 11:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:49.317 11:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:49.317 ************************************ 00:17:49.317 START TEST nvmf_lvs_grow 00:17:49.317 ************************************ 00:17:49.317 11:45:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:49.577 * Looking for test storage... 00:17:49.577 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:49.577 11:45:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:49.577 11:45:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:49.577 11:45:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:49.577 11:45:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:49.577 11:45:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:49.577 11:45:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:49.577 11:45:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:49.577 11:45:20 -- scripts/common.sh@335 -- # IFS=.-: 00:17:49.577 11:45:20 -- scripts/common.sh@335 -- # read -ra ver1 00:17:49.577 11:45:20 -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.577 11:45:20 -- scripts/common.sh@336 -- # read -ra ver2 00:17:49.577 11:45:20 -- scripts/common.sh@337 -- # local 'op=<' 00:17:49.577 11:45:20 -- scripts/common.sh@339 -- # ver1_l=2 00:17:49.577 11:45:20 -- scripts/common.sh@340 -- # ver2_l=1 00:17:49.577 11:45:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:49.577 11:45:20 -- scripts/common.sh@343 -- # case "$op" in 00:17:49.577 11:45:20 -- scripts/common.sh@344 -- # : 1 00:17:49.577 11:45:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:49.577 11:45:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.577 11:45:20 -- scripts/common.sh@364 -- # decimal 1 00:17:49.577 11:45:20 -- scripts/common.sh@352 -- # local d=1 00:17:49.577 11:45:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.577 11:45:20 -- scripts/common.sh@354 -- # echo 1 00:17:49.577 11:45:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:49.577 11:45:20 -- scripts/common.sh@365 -- # decimal 2 00:17:49.577 11:45:20 -- scripts/common.sh@352 -- # local d=2 00:17:49.577 11:45:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.577 11:45:20 -- scripts/common.sh@354 -- # echo 2 00:17:49.577 11:45:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:49.577 11:45:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:49.577 11:45:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:49.577 11:45:20 -- scripts/common.sh@367 -- # return 0 00:17:49.577 11:45:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.577 11:45:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:49.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.577 --rc genhtml_branch_coverage=1 00:17:49.577 --rc genhtml_function_coverage=1 00:17:49.577 --rc genhtml_legend=1 00:17:49.577 --rc geninfo_all_blocks=1 00:17:49.577 --rc geninfo_unexecuted_blocks=1 00:17:49.577 00:17:49.577 ' 00:17:49.577 11:45:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:49.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.577 --rc genhtml_branch_coverage=1 00:17:49.577 --rc genhtml_function_coverage=1 00:17:49.577 --rc genhtml_legend=1 00:17:49.577 --rc geninfo_all_blocks=1 00:17:49.577 --rc geninfo_unexecuted_blocks=1 00:17:49.577 00:17:49.577 ' 00:17:49.577 11:45:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:49.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.577 --rc genhtml_branch_coverage=1 00:17:49.577 --rc genhtml_function_coverage=1 00:17:49.577 --rc genhtml_legend=1 00:17:49.577 --rc geninfo_all_blocks=1 00:17:49.577 --rc geninfo_unexecuted_blocks=1 00:17:49.577 00:17:49.577 ' 00:17:49.577 11:45:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:49.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.577 --rc genhtml_branch_coverage=1 00:17:49.577 --rc genhtml_function_coverage=1 00:17:49.577 --rc genhtml_legend=1 00:17:49.577 --rc geninfo_all_blocks=1 00:17:49.577 --rc geninfo_unexecuted_blocks=1 00:17:49.577 00:17:49.577 ' 00:17:49.577 11:45:20 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.577 11:45:20 -- nvmf/common.sh@7 -- # uname -s 00:17:49.577 11:45:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.577 11:45:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.577 11:45:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.577 11:45:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.578 11:45:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.578 11:45:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.578 11:45:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.578 11:45:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.578 11:45:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.578 11:45:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.578 11:45:20 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:49.578 11:45:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:49.578 11:45:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.578 11:45:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.578 11:45:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.578 11:45:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:49.578 11:45:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.578 11:45:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.578 11:45:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.578 11:45:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.578 11:45:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.578 11:45:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.578 11:45:20 -- paths/export.sh@5 -- # export PATH 00:17:49.578 11:45:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.578 11:45:20 -- nvmf/common.sh@46 -- # : 0 00:17:49.578 11:45:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:49.578 11:45:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:49.578 11:45:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:49.578 11:45:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.578 11:45:20 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.578 11:45:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:49.578 11:45:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:49.578 11:45:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:49.578 11:45:20 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:49.578 11:45:20 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.578 11:45:20 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:49.578 11:45:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:49.578 11:45:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.578 11:45:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:49.578 11:45:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:49.578 11:45:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:49.578 11:45:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.578 11:45:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.578 11:45:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.578 11:45:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:49.578 11:45:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:49.578 11:45:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:49.578 11:45:20 -- common/autotest_common.sh@10 -- # set +x 00:17:57.698 11:45:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:57.698 11:45:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:57.698 11:45:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:57.698 11:45:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:57.698 11:45:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:57.698 11:45:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:57.698 11:45:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:57.698 11:45:26 -- nvmf/common.sh@294 -- # net_devs=() 00:17:57.698 11:45:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:57.698 11:45:26 -- nvmf/common.sh@295 -- # e810=() 00:17:57.698 11:45:26 -- nvmf/common.sh@295 -- # local -ga e810 00:17:57.698 11:45:26 -- nvmf/common.sh@296 -- # x722=() 00:17:57.698 11:45:26 -- nvmf/common.sh@296 -- # local -ga x722 00:17:57.698 11:45:26 -- nvmf/common.sh@297 -- # mlx=() 00:17:57.698 11:45:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:57.698 11:45:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.698 11:45:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:57.698 11:45:26 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 
00:17:57.698 11:45:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:57.698 11:45:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:57.698 11:45:26 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:57.698 11:45:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:57.698 11:45:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:57.698 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:57.698 11:45:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:57.698 11:45:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:57.698 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:57.698 11:45:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:57.698 11:45:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:57.698 11:45:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.698 11:45:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:57.698 11:45:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.698 11:45:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:57.698 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:57.698 11:45:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.698 11:45:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.698 11:45:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:57.698 11:45:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.698 11:45:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:57.698 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:57.698 11:45:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.698 11:45:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:57.698 11:45:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:57.698 11:45:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:57.698 11:45:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:57.698 11:45:26 -- nvmf/common.sh@57 -- # uname 00:17:57.698 11:45:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux 
']' 00:17:57.698 11:45:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:57.698 11:45:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:57.698 11:45:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:57.698 11:45:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:57.698 11:45:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:57.698 11:45:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:57.698 11:45:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:57.698 11:45:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:57.698 11:45:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:57.698 11:45:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:57.698 11:45:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:57.698 11:45:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:57.698 11:45:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:57.698 11:45:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:57.698 11:45:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:57.698 11:45:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:57.698 11:45:26 -- nvmf/common.sh@104 -- # continue 2 00:17:57.698 11:45:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.698 11:45:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:57.698 11:45:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:57.698 11:45:26 -- nvmf/common.sh@104 -- # continue 2 00:17:57.698 11:45:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:57.698 11:45:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:57.698 11:45:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:57.698 11:45:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:57.698 11:45:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:57.698 11:45:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:57.698 11:45:27 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:57.698 11:45:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:57.698 11:45:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:57.698 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:57.698 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:57.698 altname enp217s0f0np0 00:17:57.698 altname ens818f0np0 00:17:57.698 inet 192.168.100.8/24 scope global mlx_0_0 00:17:57.698 valid_lft forever preferred_lft forever 00:17:57.698 11:45:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:57.698 11:45:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:57.698 11:45:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:57.698 11:45:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:57.698 11:45:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:57.698 11:45:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:57.698 11:45:27 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:57.698 11:45:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:57.698 11:45:27 -- nvmf/common.sh@80 -- # 
ip addr show mlx_0_1 00:17:57.698 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:57.698 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:57.698 altname enp217s0f1np1 00:17:57.698 altname ens818f1np1 00:17:57.698 inet 192.168.100.9/24 scope global mlx_0_1 00:17:57.698 valid_lft forever preferred_lft forever 00:17:57.698 11:45:27 -- nvmf/common.sh@410 -- # return 0 00:17:57.698 11:45:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.698 11:45:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:57.698 11:45:27 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:57.698 11:45:27 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:57.698 11:45:27 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:57.698 11:45:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:57.698 11:45:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:57.698 11:45:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:57.699 11:45:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:57.699 11:45:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:57.699 11:45:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:57.699 11:45:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.699 11:45:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:57.699 11:45:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:57.699 11:45:27 -- nvmf/common.sh@104 -- # continue 2 00:17:57.699 11:45:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:57.699 11:45:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.699 11:45:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:57.699 11:45:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.699 11:45:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:57.699 11:45:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:57.699 11:45:27 -- nvmf/common.sh@104 -- # continue 2 00:17:57.699 11:45:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:57.699 11:45:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:57.699 11:45:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:57.699 11:45:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:57.699 11:45:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:57.699 11:45:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:57.699 11:45:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:57.699 11:45:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:57.699 11:45:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:57.699 11:45:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:57.699 11:45:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:57.699 11:45:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:57.699 11:45:27 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:57.699 192.168.100.9' 00:17:57.699 11:45:27 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:57.699 192.168.100.9' 00:17:57.699 11:45:27 -- nvmf/common.sh@445 -- # head -n 1 00:17:57.699 11:45:27 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:57.699 11:45:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:57.699 192.168.100.9' 00:17:57.699 11:45:27 -- nvmf/common.sh@446 -- # tail -n +2 00:17:57.699 11:45:27 -- nvmf/common.sh@446 -- # head -n 1 00:17:57.699 11:45:27 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:57.699 11:45:27 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:57.699 11:45:27 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:57.699 11:45:27 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:57.699 11:45:27 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:57.699 11:45:27 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:57.699 11:45:27 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:57.699 11:45:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:57.699 11:45:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.699 11:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.699 11:45:27 -- nvmf/common.sh@469 -- # nvmfpid=3743744 00:17:57.699 11:45:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:57.699 11:45:27 -- nvmf/common.sh@470 -- # waitforlisten 3743744 00:17:57.699 11:45:27 -- common/autotest_common.sh@829 -- # '[' -z 3743744 ']' 00:17:57.699 11:45:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.699 11:45:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.699 11:45:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.699 11:45:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.699 11:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.699 [2024-12-03 11:45:27.199199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:57.699 [2024-12-03 11:45:27.199245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.699 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.699 [2024-12-03 11:45:27.267757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.699 [2024-12-03 11:45:27.338181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.699 [2024-12-03 11:45:27.338293] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.699 [2024-12-03 11:45:27.338303] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.699 [2024-12-03 11:45:27.338312] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
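As a side note, the address discovery traced above amounts to this small sketch (interface names and addresses as reported in this log; the awk/cut pipeline is copied from nvmf/common.sh):

    # field 4 of 'ip -o -4 addr show <if>' is the address/prefix; cut strips the prefix length
    NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9 in this run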
00:17:57.699 [2024-12-03 11:45:27.338334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.699 11:45:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.699 11:45:28 -- common/autotest_common.sh@862 -- # return 0 00:17:57.699 11:45:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:57.699 11:45:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.699 11:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:57.699 11:45:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:57.699 [2024-12-03 11:45:28.227530] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c53f30/0x1c58420) succeed. 00:17:57.699 [2024-12-03 11:45:28.236444] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c55430/0x1c99ac0) succeed. 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:57.699 11:45:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:57.699 11:45:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:57.699 11:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:57.699 ************************************ 00:17:57.699 START TEST lvs_grow_clean 00:17:57.699 ************************************ 00:17:57.699 11:45:28 -- common/autotest_common.sh@1114 -- # lvs_grow 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:57.699 11:45:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:57.958 11:45:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:57.958 11:45:28 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:57.958 11:45:28 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:57.958 11:45:28 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:58.217 11:45:28 -- target/nvmf_lvs_grow.sh@28 -- # lvs=55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:17:58.217 11:45:28 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:17:58.217 11:45:28 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:58.475 11:45:28 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:58.475 11:45:28 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:58.475 11:45:28 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
55d7d656-1763-4474-95eb-ae1ec6baa6ae lvol 150 00:17:58.475 11:45:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4f9bf56c-fffa-45c7-adca-4bef441f6212 00:17:58.475 11:45:29 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:58.475 11:45:29 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:58.733 [2024-12-03 11:45:29.188641] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:58.733 [2024-12-03 11:45:29.188695] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:58.733 true 00:17:58.733 11:45:29 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:17:58.733 11:45:29 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:58.991 11:45:29 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:58.991 11:45:29 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:58.991 11:45:29 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4f9bf56c-fffa-45c7-adca-4bef441f6212 00:17:59.250 11:45:29 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:59.508 [2024-12-03 11:45:29.878895] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:59.508 11:45:29 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:59.508 11:45:30 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3744229 00:17:59.508 11:45:30 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.508 11:45:30 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:59.508 11:45:30 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3744229 /var/tmp/bdevperf.sock 00:17:59.508 11:45:30 -- common/autotest_common.sh@829 -- # '[' -z 3744229 ']' 00:17:59.508 11:45:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.508 11:45:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.508 11:45:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.508 11:45:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.508 11:45:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.508 [2024-12-03 11:45:30.085883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:59.508 [2024-12-03 11:45:30.085941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744229 ] 00:17:59.508 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.766 [2024-12-03 11:45:30.153070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.766 [2024-12-03 11:45:30.221586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.331 11:45:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.331 11:45:30 -- common/autotest_common.sh@862 -- # return 0 00:18:00.331 11:45:30 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:00.589 Nvme0n1 00:18:00.589 11:45:31 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:00.847 [ 00:18:00.848 { 00:18:00.848 "name": "Nvme0n1", 00:18:00.848 "aliases": [ 00:18:00.848 "4f9bf56c-fffa-45c7-adca-4bef441f6212" 00:18:00.848 ], 00:18:00.848 "product_name": "NVMe disk", 00:18:00.848 "block_size": 4096, 00:18:00.848 "num_blocks": 38912, 00:18:00.848 "uuid": "4f9bf56c-fffa-45c7-adca-4bef441f6212", 00:18:00.848 "assigned_rate_limits": { 00:18:00.848 "rw_ios_per_sec": 0, 00:18:00.848 "rw_mbytes_per_sec": 0, 00:18:00.848 "r_mbytes_per_sec": 0, 00:18:00.848 "w_mbytes_per_sec": 0 00:18:00.848 }, 00:18:00.848 "claimed": false, 00:18:00.848 "zoned": false, 00:18:00.848 "supported_io_types": { 00:18:00.848 "read": true, 00:18:00.848 "write": true, 00:18:00.848 "unmap": true, 00:18:00.848 "write_zeroes": true, 00:18:00.848 "flush": true, 00:18:00.848 "reset": true, 00:18:00.848 "compare": true, 00:18:00.848 "compare_and_write": true, 00:18:00.848 "abort": true, 00:18:00.848 "nvme_admin": true, 00:18:00.848 "nvme_io": true 00:18:00.848 }, 00:18:00.848 "memory_domains": [ 00:18:00.848 { 00:18:00.848 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:00.848 "dma_device_type": 0 00:18:00.848 } 00:18:00.848 ], 00:18:00.848 "driver_specific": { 00:18:00.848 "nvme": [ 00:18:00.848 { 00:18:00.848 "trid": { 00:18:00.848 "trtype": "RDMA", 00:18:00.848 "adrfam": "IPv4", 00:18:00.848 "traddr": "192.168.100.8", 00:18:00.848 "trsvcid": "4420", 00:18:00.848 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:00.848 }, 00:18:00.848 "ctrlr_data": { 00:18:00.848 "cntlid": 1, 00:18:00.848 "vendor_id": "0x8086", 00:18:00.848 "model_number": "SPDK bdev Controller", 00:18:00.848 "serial_number": "SPDK0", 00:18:00.848 "firmware_revision": "24.01.1", 00:18:00.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:00.848 "oacs": { 00:18:00.848 "security": 0, 00:18:00.848 "format": 0, 00:18:00.848 "firmware": 0, 00:18:00.848 "ns_manage": 0 00:18:00.848 }, 00:18:00.848 "multi_ctrlr": true, 00:18:00.848 "ana_reporting": false 00:18:00.848 }, 00:18:00.848 "vs": { 00:18:00.848 "nvme_version": "1.3" 00:18:00.848 }, 00:18:00.848 "ns_data": { 00:18:00.848 "id": 1, 00:18:00.848 "can_share": true 00:18:00.848 } 00:18:00.848 } 00:18:00.848 ], 00:18:00.848 "mp_policy": "active_passive" 00:18:00.848 } 00:18:00.848 } 00:18:00.848 ] 00:18:00.848 11:45:31 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3744500 00:18:00.848 11:45:31 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:00.848 11:45:31 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:00.848 Running I/O for 10 seconds... 00:18:02.223 Latency(us) 00:18:02.223 [2024-12-03T10:45:32.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.223 [2024-12-03T10:45:32.837Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.223 Nvme0n1 : 1.00 36550.00 142.77 0.00 0.00 0.00 0.00 0.00 00:18:02.223 [2024-12-03T10:45:32.837Z] =================================================================================================================== 00:18:02.223 [2024-12-03T10:45:32.837Z] Total : 36550.00 142.77 0.00 0.00 0.00 0.00 0.00 00:18:02.223 00:18:02.790 11:45:33 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:03.048 [2024-12-03T10:45:33.662Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.048 Nvme0n1 : 2.00 36866.00 144.01 0.00 0.00 0.00 0.00 0.00 00:18:03.048 [2024-12-03T10:45:33.662Z] =================================================================================================================== 00:18:03.048 [2024-12-03T10:45:33.662Z] Total : 36866.00 144.01 0.00 0.00 0.00 0.00 0.00 00:18:03.048 00:18:03.048 true 00:18:03.048 11:45:33 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:03.048 11:45:33 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:03.306 11:45:33 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:03.306 11:45:33 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:03.306 11:45:33 -- target/nvmf_lvs_grow.sh@65 -- # wait 3744500 00:18:03.872 [2024-12-03T10:45:34.486Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.872 Nvme0n1 : 3.00 36948.67 144.33 0.00 0.00 0.00 0.00 0.00 00:18:03.872 [2024-12-03T10:45:34.486Z] =================================================================================================================== 00:18:03.872 [2024-12-03T10:45:34.486Z] Total : 36948.67 144.33 0.00 0.00 0.00 0.00 0.00 00:18:03.872 00:18:05.245 [2024-12-03T10:45:35.859Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.245 Nvme0n1 : 4.00 37073.00 144.82 0.00 0.00 0.00 0.00 0.00 00:18:05.245 [2024-12-03T10:45:35.860Z] =================================================================================================================== 00:18:05.246 [2024-12-03T10:45:35.860Z] Total : 37073.00 144.82 0.00 0.00 0.00 0.00 0.00 00:18:05.246 00:18:06.182 [2024-12-03T10:45:36.796Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.182 Nvme0n1 : 5.00 37170.60 145.20 0.00 0.00 0.00 0.00 0.00 00:18:06.182 [2024-12-03T10:45:36.796Z] =================================================================================================================== 00:18:06.182 [2024-12-03T10:45:36.796Z] Total : 37170.60 145.20 0.00 0.00 0.00 0.00 0.00 00:18:06.182 00:18:07.116 [2024-12-03T10:45:37.730Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.116 Nvme0n1 : 6.00 37211.00 145.36 0.00 0.00 0.00 0.00 0.00 00:18:07.116 [2024-12-03T10:45:37.730Z] 
=================================================================================================================== 00:18:07.116 [2024-12-03T10:45:37.730Z] Total : 37211.00 145.36 0.00 0.00 0.00 0.00 0.00 00:18:07.116 00:18:08.051 [2024-12-03T10:45:38.665Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.051 Nvme0n1 : 7.00 37266.00 145.57 0.00 0.00 0.00 0.00 0.00 00:18:08.051 [2024-12-03T10:45:38.665Z] =================================================================================================================== 00:18:08.051 [2024-12-03T10:45:38.665Z] Total : 37266.00 145.57 0.00 0.00 0.00 0.00 0.00 00:18:08.051 00:18:08.986 [2024-12-03T10:45:39.600Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.986 Nvme0n1 : 8.00 37311.75 145.75 0.00 0.00 0.00 0.00 0.00 00:18:08.986 [2024-12-03T10:45:39.600Z] =================================================================================================================== 00:18:08.986 [2024-12-03T10:45:39.600Z] Total : 37311.75 145.75 0.00 0.00 0.00 0.00 0.00 00:18:08.986 00:18:09.916 [2024-12-03T10:45:40.530Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.917 Nvme0n1 : 9.00 37283.44 145.64 0.00 0.00 0.00 0.00 0.00 00:18:09.917 [2024-12-03T10:45:40.531Z] =================================================================================================================== 00:18:09.917 [2024-12-03T10:45:40.531Z] Total : 37283.44 145.64 0.00 0.00 0.00 0.00 0.00 00:18:09.917 00:18:10.850 [2024-12-03T10:45:41.464Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.850 Nvme0n1 : 10.00 37305.20 145.72 0.00 0.00 0.00 0.00 0.00 00:18:10.850 [2024-12-03T10:45:41.464Z] =================================================================================================================== 00:18:10.850 [2024-12-03T10:45:41.464Z] Total : 37305.20 145.72 0.00 0.00 0.00 0.00 0.00 00:18:10.850 00:18:10.850 00:18:10.850 Latency(us) 00:18:10.850 [2024-12-03T10:45:41.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.850 [2024-12-03T10:45:41.464Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.850 Nvme0n1 : 10.00 37305.69 145.73 0.00 0.00 3428.41 2555.90 7916.75 00:18:10.850 [2024-12-03T10:45:41.464Z] =================================================================================================================== 00:18:10.850 [2024-12-03T10:45:41.464Z] Total : 37305.69 145.73 0.00 0.00 3428.41 2555.90 7916.75 00:18:10.850 0 00:18:11.109 11:45:41 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3744229 00:18:11.109 11:45:41 -- common/autotest_common.sh@936 -- # '[' -z 3744229 ']' 00:18:11.109 11:45:41 -- common/autotest_common.sh@940 -- # kill -0 3744229 00:18:11.109 11:45:41 -- common/autotest_common.sh@941 -- # uname 00:18:11.109 11:45:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.109 11:45:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3744229 00:18:11.109 11:45:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:11.109 11:45:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:11.109 11:45:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3744229' 00:18:11.109 killing process with pid 3744229 00:18:11.109 11:45:41 -- common/autotest_common.sh@955 -- # kill 3744229 00:18:11.109 Received shutdown signal, test time was about 10.000000 seconds 
00:18:11.109 00:18:11.109 Latency(us) 00:18:11.109 [2024-12-03T10:45:41.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.109 [2024-12-03T10:45:41.723Z] =================================================================================================================== 00:18:11.109 [2024-12-03T10:45:41.723Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.109 11:45:41 -- common/autotest_common.sh@960 -- # wait 3744229 00:18:11.368 11:45:41 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:11.368 11:45:41 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:11.368 11:45:41 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:11.626 11:45:42 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:11.626 11:45:42 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:11.626 11:45:42 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:11.884 [2024-12-03 11:45:42.248879] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:11.884 11:45:42 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:11.884 11:45:42 -- common/autotest_common.sh@650 -- # local es=0 00:18:11.884 11:45:42 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:11.884 11:45:42 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:11.884 11:45:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.885 11:45:42 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:11.885 11:45:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.885 11:45:42 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:11.885 11:45:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.885 11:45:42 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:11.885 11:45:42 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:11.885 11:45:42 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:11.885 request: 00:18:11.885 { 00:18:11.885 "uuid": "55d7d656-1763-4474-95eb-ae1ec6baa6ae", 00:18:11.885 "method": "bdev_lvol_get_lvstores", 00:18:11.885 "req_id": 1 00:18:11.885 } 00:18:11.885 Got JSON-RPC error response 00:18:11.885 response: 00:18:11.885 { 00:18:11.885 "code": -19, 00:18:11.885 "message": "No such device" 00:18:11.885 } 00:18:11.885 11:45:42 -- common/autotest_common.sh@653 -- # es=1 00:18:11.885 11:45:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:11.885 11:45:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:11.885 11:45:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:11.885 11:45:42 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:12.142 aio_bdev 00:18:12.142 11:45:42 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 4f9bf56c-fffa-45c7-adca-4bef441f6212 00:18:12.142 11:45:42 -- common/autotest_common.sh@897 -- # local bdev_name=4f9bf56c-fffa-45c7-adca-4bef441f6212 00:18:12.142 11:45:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:12.142 11:45:42 -- common/autotest_common.sh@899 -- # local i 00:18:12.142 11:45:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:12.142 11:45:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:12.142 11:45:42 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:12.400 11:45:42 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4f9bf56c-fffa-45c7-adca-4bef441f6212 -t 2000 00:18:12.400 [ 00:18:12.400 { 00:18:12.400 "name": "4f9bf56c-fffa-45c7-adca-4bef441f6212", 00:18:12.400 "aliases": [ 00:18:12.400 "lvs/lvol" 00:18:12.400 ], 00:18:12.400 "product_name": "Logical Volume", 00:18:12.400 "block_size": 4096, 00:18:12.400 "num_blocks": 38912, 00:18:12.400 "uuid": "4f9bf56c-fffa-45c7-adca-4bef441f6212", 00:18:12.400 "assigned_rate_limits": { 00:18:12.400 "rw_ios_per_sec": 0, 00:18:12.400 "rw_mbytes_per_sec": 0, 00:18:12.400 "r_mbytes_per_sec": 0, 00:18:12.400 "w_mbytes_per_sec": 0 00:18:12.400 }, 00:18:12.400 "claimed": false, 00:18:12.400 "zoned": false, 00:18:12.400 "supported_io_types": { 00:18:12.400 "read": true, 00:18:12.400 "write": true, 00:18:12.400 "unmap": true, 00:18:12.400 "write_zeroes": true, 00:18:12.400 "flush": false, 00:18:12.400 "reset": true, 00:18:12.400 "compare": false, 00:18:12.400 "compare_and_write": false, 00:18:12.400 "abort": false, 00:18:12.400 "nvme_admin": false, 00:18:12.400 "nvme_io": false 00:18:12.400 }, 00:18:12.400 "driver_specific": { 00:18:12.400 "lvol": { 00:18:12.400 "lvol_store_uuid": "55d7d656-1763-4474-95eb-ae1ec6baa6ae", 00:18:12.400 "base_bdev": "aio_bdev", 00:18:12.400 "thin_provision": false, 00:18:12.400 "snapshot": false, 00:18:12.400 "clone": false, 00:18:12.400 "esnap_clone": false 00:18:12.400 } 00:18:12.400 } 00:18:12.400 } 00:18:12.400 ] 00:18:12.400 11:45:42 -- common/autotest_common.sh@905 -- # return 0 00:18:12.400 11:45:42 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:12.400 11:45:42 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:12.658 11:45:43 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:12.658 11:45:43 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:12.658 11:45:43 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:12.916 11:45:43 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:12.916 11:45:43 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f9bf56c-fffa-45c7-adca-4bef441f6212 00:18:12.916 11:45:43 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55d7d656-1763-4474-95eb-ae1ec6baa6ae 00:18:13.174 11:45:43 -- 
target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:13.432 00:18:13.432 real 0m15.601s 00:18:13.432 user 0m15.582s 00:18:13.432 sys 0m1.123s 00:18:13.432 11:45:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:13.432 11:45:43 -- common/autotest_common.sh@10 -- # set +x 00:18:13.432 ************************************ 00:18:13.432 END TEST lvs_grow_clean 00:18:13.432 ************************************ 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:13.432 11:45:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:13.432 11:45:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.432 11:45:43 -- common/autotest_common.sh@10 -- # set +x 00:18:13.432 ************************************ 00:18:13.432 START TEST lvs_grow_dirty 00:18:13.432 ************************************ 00:18:13.432 11:45:43 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:13.432 11:45:43 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:13.690 11:45:44 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:13.690 11:45:44 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:13.948 11:45:44 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:13.948 11:45:44 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:13.948 11:45:44 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:13.948 11:45:44 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:13.948 11:45:44 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:13.948 11:45:44 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 lvol 150 00:18:14.206 11:45:44 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 00:18:14.206 11:45:44 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:14.206 11:45:44 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:18:14.464 [2024-12-03 11:45:44.824775] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:14.464 [2024-12-03 11:45:44.824828] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:14.464 true 00:18:14.464 11:45:44 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:14.464 11:45:44 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:14.464 11:45:45 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:14.464 11:45:45 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:14.720 11:45:45 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 00:18:14.977 11:45:45 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:14.977 11:45:45 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:15.232 11:45:45 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3746987 00:18:15.232 11:45:45 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.232 11:45:45 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3746987 /var/tmp/bdevperf.sock 00:18:15.232 11:45:45 -- common/autotest_common.sh@829 -- # '[' -z 3746987 ']' 00:18:15.232 11:45:45 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:15.232 11:45:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.232 11:45:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.232 11:45:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.232 11:45:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.232 11:45:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.232 [2024-12-03 11:45:45.746596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:15.232 [2024-12-03 11:45:45.746652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746987 ] 00:18:15.232 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.232 [2024-12-03 11:45:45.816986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.489 [2024-12-03 11:45:45.890741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.066 11:45:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.066 11:45:46 -- common/autotest_common.sh@862 -- # return 0 00:18:16.066 11:45:46 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:16.368 Nvme0n1 00:18:16.368 11:45:46 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:16.626 [ 00:18:16.626 { 00:18:16.626 "name": "Nvme0n1", 00:18:16.626 "aliases": [ 00:18:16.626 "5494c8eb-b68d-41a0-8639-bd8a5b1e90b7" 00:18:16.626 ], 00:18:16.626 "product_name": "NVMe disk", 00:18:16.626 "block_size": 4096, 00:18:16.626 "num_blocks": 38912, 00:18:16.626 "uuid": "5494c8eb-b68d-41a0-8639-bd8a5b1e90b7", 00:18:16.626 "assigned_rate_limits": { 00:18:16.626 "rw_ios_per_sec": 0, 00:18:16.626 "rw_mbytes_per_sec": 0, 00:18:16.626 "r_mbytes_per_sec": 0, 00:18:16.626 "w_mbytes_per_sec": 0 00:18:16.626 }, 00:18:16.626 "claimed": false, 00:18:16.626 "zoned": false, 00:18:16.626 "supported_io_types": { 00:18:16.626 "read": true, 00:18:16.626 "write": true, 00:18:16.626 "unmap": true, 00:18:16.626 "write_zeroes": true, 00:18:16.626 "flush": true, 00:18:16.626 "reset": true, 00:18:16.626 "compare": true, 00:18:16.626 "compare_and_write": true, 00:18:16.626 "abort": true, 00:18:16.626 "nvme_admin": true, 00:18:16.626 "nvme_io": true 00:18:16.626 }, 00:18:16.626 "memory_domains": [ 00:18:16.626 { 00:18:16.626 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:16.626 "dma_device_type": 0 00:18:16.626 } 00:18:16.626 ], 00:18:16.626 "driver_specific": { 00:18:16.626 "nvme": [ 00:18:16.626 { 00:18:16.626 "trid": { 00:18:16.626 "trtype": "RDMA", 00:18:16.626 "adrfam": "IPv4", 00:18:16.626 "traddr": "192.168.100.8", 00:18:16.626 "trsvcid": "4420", 00:18:16.626 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:16.626 }, 00:18:16.626 "ctrlr_data": { 00:18:16.626 "cntlid": 1, 00:18:16.626 "vendor_id": "0x8086", 00:18:16.626 "model_number": "SPDK bdev Controller", 00:18:16.626 "serial_number": "SPDK0", 00:18:16.626 "firmware_revision": "24.01.1", 00:18:16.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:16.626 "oacs": { 00:18:16.626 "security": 0, 00:18:16.626 "format": 0, 00:18:16.626 "firmware": 0, 00:18:16.626 "ns_manage": 0 00:18:16.626 }, 00:18:16.626 "multi_ctrlr": true, 00:18:16.626 "ana_reporting": false 00:18:16.626 }, 00:18:16.626 "vs": { 00:18:16.626 "nvme_version": "1.3" 00:18:16.626 }, 00:18:16.626 "ns_data": { 00:18:16.626 "id": 1, 00:18:16.626 "can_share": true 00:18:16.626 } 00:18:16.626 } 00:18:16.626 ], 00:18:16.626 "mp_policy": "active_passive" 00:18:16.626 } 00:18:16.626 } 00:18:16.626 ] 00:18:16.626 11:45:46 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3747265 00:18:16.626 11:45:46 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:16.626 11:45:46 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:16.626 Running I/O for 10 seconds... 00:18:17.561 Latency(us) 00:18:17.561 [2024-12-03T10:45:48.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.561 [2024-12-03T10:45:48.175Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.561 Nvme0n1 : 1.00 36353.00 142.00 0.00 0.00 0.00 0.00 0.00 00:18:17.561 [2024-12-03T10:45:48.175Z] =================================================================================================================== 00:18:17.561 [2024-12-03T10:45:48.175Z] Total : 36353.00 142.00 0.00 0.00 0.00 0.00 0.00 00:18:17.561 00:18:18.496 11:45:49 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:18.496 [2024-12-03T10:45:49.110Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:18.496 Nvme0n1 : 2.00 36800.00 143.75 0.00 0.00 0.00 0.00 0.00 00:18:18.496 [2024-12-03T10:45:49.110Z] =================================================================================================================== 00:18:18.496 [2024-12-03T10:45:49.110Z] Total : 36800.00 143.75 0.00 0.00 0.00 0.00 0.00 00:18:18.496 00:18:18.754 true 00:18:18.754 11:45:49 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:18.754 11:45:49 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:18.754 11:45:49 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:18.754 11:45:49 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:18.754 11:45:49 -- target/nvmf_lvs_grow.sh@65 -- # wait 3747265 00:18:19.688 [2024-12-03T10:45:50.302Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.688 Nvme0n1 : 3.00 36864.67 144.00 0.00 0.00 0.00 0.00 0.00 00:18:19.688 [2024-12-03T10:45:50.302Z] =================================================================================================================== 00:18:19.688 [2024-12-03T10:45:50.302Z] Total : 36864.67 144.00 0.00 0.00 0.00 0.00 0.00 00:18:19.688 00:18:20.623 [2024-12-03T10:45:51.237Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.623 Nvme0n1 : 4.00 36912.25 144.19 0.00 0.00 0.00 0.00 0.00 00:18:20.623 [2024-12-03T10:45:51.237Z] =================================================================================================================== 00:18:20.623 [2024-12-03T10:45:51.237Z] Total : 36912.25 144.19 0.00 0.00 0.00 0.00 0.00 00:18:20.623 00:18:21.557 [2024-12-03T10:45:52.171Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.557 Nvme0n1 : 5.00 37037.20 144.68 0.00 0.00 0.00 0.00 0.00 00:18:21.557 [2024-12-03T10:45:52.171Z] =================================================================================================================== 00:18:21.557 [2024-12-03T10:45:52.171Z] Total : 37037.20 144.68 0.00 0.00 0.00 0.00 0.00 00:18:21.557 00:18:22.490 [2024-12-03T10:45:53.104Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:22.490 Nvme0n1 : 6.00 37130.50 145.04 0.00 0.00 0.00 0.00 0.00 00:18:22.490 [2024-12-03T10:45:53.104Z] 
=================================================================================================================== 00:18:22.490 [2024-12-03T10:45:53.104Z] Total : 37130.50 145.04 0.00 0.00 0.00 0.00 0.00 00:18:22.490 00:18:23.863 [2024-12-03T10:45:54.477Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:23.863 Nvme0n1 : 7.00 37197.57 145.30 0.00 0.00 0.00 0.00 0.00 00:18:23.863 [2024-12-03T10:45:54.477Z] =================================================================================================================== 00:18:23.863 [2024-12-03T10:45:54.477Z] Total : 37197.57 145.30 0.00 0.00 0.00 0.00 0.00 00:18:23.863 00:18:24.796 [2024-12-03T10:45:55.410Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.796 Nvme0n1 : 8.00 37240.38 145.47 0.00 0.00 0.00 0.00 0.00 00:18:24.796 [2024-12-03T10:45:55.410Z] =================================================================================================================== 00:18:24.796 [2024-12-03T10:45:55.410Z] Total : 37240.38 145.47 0.00 0.00 0.00 0.00 0.00 00:18:24.796 00:18:25.726 [2024-12-03T10:45:56.340Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.726 Nvme0n1 : 9.00 37280.00 145.62 0.00 0.00 0.00 0.00 0.00 00:18:25.726 [2024-12-03T10:45:56.340Z] =================================================================================================================== 00:18:25.726 [2024-12-03T10:45:56.340Z] Total : 37280.00 145.62 0.00 0.00 0.00 0.00 0.00 00:18:25.726 00:18:26.655 [2024-12-03T10:45:57.269Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.655 Nvme0n1 : 10.00 37318.40 145.78 0.00 0.00 0.00 0.00 0.00 00:18:26.655 [2024-12-03T10:45:57.269Z] =================================================================================================================== 00:18:26.655 [2024-12-03T10:45:57.269Z] Total : 37318.40 145.78 0.00 0.00 0.00 0.00 0.00 00:18:26.655 00:18:26.655 00:18:26.655 Latency(us) 00:18:26.655 [2024-12-03T10:45:57.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.655 [2024-12-03T10:45:57.269Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.655 Nvme0n1 : 10.00 37318.82 145.78 0.00 0.00 3427.09 2451.05 13631.49 00:18:26.655 [2024-12-03T10:45:57.269Z] =================================================================================================================== 00:18:26.655 [2024-12-03T10:45:57.269Z] Total : 37318.82 145.78 0.00 0.00 3427.09 2451.05 13631.49 00:18:26.655 0 00:18:26.655 11:45:57 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3746987 00:18:26.655 11:45:57 -- common/autotest_common.sh@936 -- # '[' -z 3746987 ']' 00:18:26.655 11:45:57 -- common/autotest_common.sh@940 -- # kill -0 3746987 00:18:26.655 11:45:57 -- common/autotest_common.sh@941 -- # uname 00:18:26.655 11:45:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.655 11:45:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3746987 00:18:26.656 11:45:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:26.656 11:45:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:26.656 11:45:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3746987' 00:18:26.656 killing process with pid 3746987 00:18:26.656 11:45:57 -- common/autotest_common.sh@955 -- # kill 3746987 00:18:26.656 Received shutdown signal, test time was about 10.000000 seconds 
00:18:26.656 00:18:26.656 Latency(us) 00:18:26.656 [2024-12-03T10:45:57.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.656 [2024-12-03T10:45:57.270Z] =================================================================================================================== 00:18:26.656 [2024-12-03T10:45:57.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.656 11:45:57 -- common/autotest_common.sh@960 -- # wait 3746987 00:18:26.914 11:45:57 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:27.172 11:45:57 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:27.172 11:45:57 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:27.431 11:45:57 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:27.431 11:45:57 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:27.431 11:45:57 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3743744 00:18:27.431 11:45:57 -- target/nvmf_lvs_grow.sh@74 -- # wait 3743744 00:18:27.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3743744 Killed "${NVMF_APP[@]}" "$@" 00:18:27.431 11:45:57 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:27.431 11:45:57 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:27.431 11:45:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:27.431 11:45:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.431 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:18:27.431 11:45:57 -- nvmf/common.sh@469 -- # nvmfpid=3749160 00:18:27.431 11:45:57 -- nvmf/common.sh@470 -- # waitforlisten 3749160 00:18:27.431 11:45:57 -- common/autotest_common.sh@829 -- # '[' -z 3749160 ']' 00:18:27.431 11:45:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.431 11:45:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.431 11:45:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.431 11:45:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.431 11:45:57 -- common/autotest_common.sh@10 -- # set +x 00:18:27.431 11:45:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:27.431 [2024-12-03 11:45:57.877219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:27.431 [2024-12-03 11:45:57.877277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.431 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.431 [2024-12-03 11:45:57.947636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.431 [2024-12-03 11:45:58.019316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.431 [2024-12-03 11:45:58.019420] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:27.431 [2024-12-03 11:45:58.019430] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.431 [2024-12-03 11:45:58.019439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.431 [2024-12-03 11:45:58.019457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.364 11:45:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.364 11:45:58 -- common/autotest_common.sh@862 -- # return 0 00:18:28.364 11:45:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.364 11:45:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.364 11:45:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.364 11:45:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.364 11:45:58 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:28.364 [2024-12-03 11:45:58.888104] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:28.364 [2024-12-03 11:45:58.888219] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:28.364 [2024-12-03 11:45:58.888246] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:28.364 11:45:58 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:28.364 11:45:58 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 00:18:28.364 11:45:58 -- common/autotest_common.sh@897 -- # local bdev_name=5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 00:18:28.364 11:45:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:28.364 11:45:58 -- common/autotest_common.sh@899 -- # local i 00:18:28.364 11:45:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:28.364 11:45:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:28.364 11:45:58 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:28.621 11:45:59 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 -t 2000 00:18:28.879 [ 00:18:28.879 { 00:18:28.879 "name": "5494c8eb-b68d-41a0-8639-bd8a5b1e90b7", 00:18:28.879 "aliases": [ 00:18:28.879 "lvs/lvol" 00:18:28.879 ], 00:18:28.879 "product_name": "Logical Volume", 00:18:28.879 "block_size": 4096, 00:18:28.879 "num_blocks": 38912, 00:18:28.879 "uuid": "5494c8eb-b68d-41a0-8639-bd8a5b1e90b7", 00:18:28.879 "assigned_rate_limits": { 00:18:28.879 "rw_ios_per_sec": 0, 00:18:28.879 "rw_mbytes_per_sec": 0, 00:18:28.879 "r_mbytes_per_sec": 0, 00:18:28.879 "w_mbytes_per_sec": 0 00:18:28.879 }, 00:18:28.879 "claimed": false, 00:18:28.879 "zoned": false, 00:18:28.879 "supported_io_types": { 00:18:28.879 "read": true, 00:18:28.879 "write": true, 00:18:28.879 "unmap": true, 00:18:28.879 "write_zeroes": true, 00:18:28.879 "flush": false, 00:18:28.879 "reset": true, 00:18:28.879 "compare": false, 00:18:28.879 "compare_and_write": false, 00:18:28.879 "abort": false, 00:18:28.879 "nvme_admin": false, 00:18:28.879 "nvme_io": false 00:18:28.879 }, 00:18:28.879 "driver_specific": { 00:18:28.879 "lvol": { 00:18:28.879 "lvol_store_uuid": "a689d6e8-f22f-43a0-b525-ca7d72a7bcc6", 00:18:28.879 "base_bdev": "aio_bdev", 00:18:28.879 "thin_provision": false, 
00:18:28.879 "snapshot": false, 00:18:28.879 "clone": false, 00:18:28.879 "esnap_clone": false 00:18:28.879 } 00:18:28.879 } 00:18:28.879 } 00:18:28.879 ] 00:18:28.879 11:45:59 -- common/autotest_common.sh@905 -- # return 0 00:18:28.879 11:45:59 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:28.879 11:45:59 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:28.879 11:45:59 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:28.879 11:45:59 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:28.879 11:45:59 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:29.137 11:45:59 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:29.137 11:45:59 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:29.396 [2024-12-03 11:45:59.760511] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:29.396 11:45:59 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:29.396 11:45:59 -- common/autotest_common.sh@650 -- # local es=0 00:18:29.396 11:45:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:29.396 11:45:59 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:29.396 11:45:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.396 11:45:59 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:29.396 11:45:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.396 11:45:59 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:29.396 11:45:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.396 11:45:59 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:29.396 11:45:59 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:29.396 11:45:59 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:29.396 request: 00:18:29.396 { 00:18:29.396 "uuid": "a689d6e8-f22f-43a0-b525-ca7d72a7bcc6", 00:18:29.396 "method": "bdev_lvol_get_lvstores", 00:18:29.396 "req_id": 1 00:18:29.396 } 00:18:29.396 Got JSON-RPC error response 00:18:29.396 response: 00:18:29.396 { 00:18:29.396 "code": -19, 00:18:29.396 "message": "No such device" 00:18:29.396 } 00:18:29.396 11:45:59 -- common/autotest_common.sh@653 -- # es=1 00:18:29.396 11:45:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.396 11:45:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.396 11:45:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.396 11:45:59 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:29.655 aio_bdev 00:18:29.655 11:46:00 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 00:18:29.655 11:46:00 -- common/autotest_common.sh@897 -- # local bdev_name=5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 00:18:29.655 11:46:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:29.655 11:46:00 -- common/autotest_common.sh@899 -- # local i 00:18:29.655 11:46:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:29.655 11:46:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:29.655 11:46:00 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:29.913 11:46:00 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 -t 2000 00:18:29.913 [ 00:18:29.913 { 00:18:29.913 "name": "5494c8eb-b68d-41a0-8639-bd8a5b1e90b7", 00:18:29.913 "aliases": [ 00:18:29.913 "lvs/lvol" 00:18:29.913 ], 00:18:29.913 "product_name": "Logical Volume", 00:18:29.913 "block_size": 4096, 00:18:29.913 "num_blocks": 38912, 00:18:29.913 "uuid": "5494c8eb-b68d-41a0-8639-bd8a5b1e90b7", 00:18:29.913 "assigned_rate_limits": { 00:18:29.913 "rw_ios_per_sec": 0, 00:18:29.913 "rw_mbytes_per_sec": 0, 00:18:29.913 "r_mbytes_per_sec": 0, 00:18:29.913 "w_mbytes_per_sec": 0 00:18:29.913 }, 00:18:29.913 "claimed": false, 00:18:29.913 "zoned": false, 00:18:29.913 "supported_io_types": { 00:18:29.913 "read": true, 00:18:29.913 "write": true, 00:18:29.913 "unmap": true, 00:18:29.913 "write_zeroes": true, 00:18:29.913 "flush": false, 00:18:29.913 "reset": true, 00:18:29.913 "compare": false, 00:18:29.913 "compare_and_write": false, 00:18:29.913 "abort": false, 00:18:29.913 "nvme_admin": false, 00:18:29.913 "nvme_io": false 00:18:29.913 }, 00:18:29.913 "driver_specific": { 00:18:29.913 "lvol": { 00:18:29.913 "lvol_store_uuid": "a689d6e8-f22f-43a0-b525-ca7d72a7bcc6", 00:18:29.913 "base_bdev": "aio_bdev", 00:18:29.913 "thin_provision": false, 00:18:29.913 "snapshot": false, 00:18:29.913 "clone": false, 00:18:29.913 "esnap_clone": false 00:18:29.913 } 00:18:29.913 } 00:18:29.913 } 00:18:29.913 ] 00:18:29.913 11:46:00 -- common/autotest_common.sh@905 -- # return 0 00:18:29.913 11:46:00 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:29.913 11:46:00 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:30.171 11:46:00 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:30.171 11:46:00 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:30.171 11:46:00 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:30.428 11:46:00 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:30.428 11:46:00 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5494c8eb-b68d-41a0-8639-bd8a5b1e90b7 00:18:30.428 11:46:01 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a689d6e8-f22f-43a0-b525-ca7d72a7bcc6 00:18:30.686 11:46:01 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:18:30.944 11:46:01 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:30.944 00:18:30.944 real 0m17.454s 00:18:30.944 user 0m45.274s 00:18:30.944 sys 0m3.161s 00:18:30.944 11:46:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:30.944 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:18:30.944 ************************************ 00:18:30.944 END TEST lvs_grow_dirty 00:18:30.944 ************************************ 00:18:30.944 11:46:01 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:30.944 11:46:01 -- common/autotest_common.sh@806 -- # type=--id 00:18:30.944 11:46:01 -- common/autotest_common.sh@807 -- # id=0 00:18:30.944 11:46:01 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:30.944 11:46:01 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:30.944 11:46:01 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:30.944 11:46:01 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:30.944 11:46:01 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:30.944 11:46:01 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:30.944 nvmf_trace.0 00:18:30.944 11:46:01 -- common/autotest_common.sh@821 -- # return 0 00:18:30.944 11:46:01 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:30.944 11:46:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:30.944 11:46:01 -- nvmf/common.sh@116 -- # sync 00:18:30.944 11:46:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:30.944 11:46:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:30.944 11:46:01 -- nvmf/common.sh@119 -- # set +e 00:18:30.944 11:46:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:30.944 11:46:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:30.944 rmmod nvme_rdma 00:18:30.944 rmmod nvme_fabrics 00:18:30.944 11:46:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:30.944 11:46:01 -- nvmf/common.sh@123 -- # set -e 00:18:30.944 11:46:01 -- nvmf/common.sh@124 -- # return 0 00:18:30.944 11:46:01 -- nvmf/common.sh@477 -- # '[' -n 3749160 ']' 00:18:30.944 11:46:01 -- nvmf/common.sh@478 -- # killprocess 3749160 00:18:30.944 11:46:01 -- common/autotest_common.sh@936 -- # '[' -z 3749160 ']' 00:18:30.944 11:46:01 -- common/autotest_common.sh@940 -- # kill -0 3749160 00:18:30.944 11:46:01 -- common/autotest_common.sh@941 -- # uname 00:18:30.944 11:46:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:30.944 11:46:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3749160 00:18:31.202 11:46:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.202 11:46:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.202 11:46:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3749160' 00:18:31.202 killing process with pid 3749160 00:18:31.202 11:46:01 -- common/autotest_common.sh@955 -- # kill 3749160 00:18:31.202 11:46:01 -- common/autotest_common.sh@960 -- # wait 3749160 00:18:31.202 11:46:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:31.202 11:46:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:31.202 00:18:31.202 real 0m41.869s 00:18:31.202 user 1m7.148s 00:18:31.202 sys 0m10.169s 00:18:31.202 11:46:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:31.202 11:46:01 -- common/autotest_common.sh@10 -- 
# set +x 00:18:31.202 ************************************ 00:18:31.202 END TEST nvmf_lvs_grow 00:18:31.202 ************************************ 00:18:31.461 11:46:01 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:31.461 11:46:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:31.461 11:46:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.461 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:18:31.461 ************************************ 00:18:31.461 START TEST nvmf_bdev_io_wait 00:18:31.461 ************************************ 00:18:31.461 11:46:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:31.461 * Looking for test storage... 00:18:31.461 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:31.461 11:46:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:31.461 11:46:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:31.461 11:46:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:31.461 11:46:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:31.461 11:46:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:31.461 11:46:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:31.461 11:46:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:31.461 11:46:02 -- scripts/common.sh@335 -- # IFS=.-: 00:18:31.461 11:46:02 -- scripts/common.sh@335 -- # read -ra ver1 00:18:31.461 11:46:02 -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.461 11:46:02 -- scripts/common.sh@336 -- # read -ra ver2 00:18:31.461 11:46:02 -- scripts/common.sh@337 -- # local 'op=<' 00:18:31.461 11:46:02 -- scripts/common.sh@339 -- # ver1_l=2 00:18:31.461 11:46:02 -- scripts/common.sh@340 -- # ver2_l=1 00:18:31.461 11:46:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:31.461 11:46:02 -- scripts/common.sh@343 -- # case "$op" in 00:18:31.461 11:46:02 -- scripts/common.sh@344 -- # : 1 00:18:31.461 11:46:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:31.461 11:46:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.461 11:46:02 -- scripts/common.sh@364 -- # decimal 1 00:18:31.461 11:46:02 -- scripts/common.sh@352 -- # local d=1 00:18:31.461 11:46:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.461 11:46:02 -- scripts/common.sh@354 -- # echo 1 00:18:31.461 11:46:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:31.461 11:46:02 -- scripts/common.sh@365 -- # decimal 2 00:18:31.461 11:46:02 -- scripts/common.sh@352 -- # local d=2 00:18:31.461 11:46:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.462 11:46:02 -- scripts/common.sh@354 -- # echo 2 00:18:31.462 11:46:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:31.462 11:46:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:31.462 11:46:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:31.462 11:46:02 -- scripts/common.sh@367 -- # return 0 00:18:31.462 11:46:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.462 11:46:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:31.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.462 --rc genhtml_branch_coverage=1 00:18:31.462 --rc genhtml_function_coverage=1 00:18:31.462 --rc genhtml_legend=1 00:18:31.462 --rc geninfo_all_blocks=1 00:18:31.462 --rc geninfo_unexecuted_blocks=1 00:18:31.462 00:18:31.462 ' 00:18:31.462 11:46:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:31.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.462 --rc genhtml_branch_coverage=1 00:18:31.462 --rc genhtml_function_coverage=1 00:18:31.462 --rc genhtml_legend=1 00:18:31.462 --rc geninfo_all_blocks=1 00:18:31.462 --rc geninfo_unexecuted_blocks=1 00:18:31.462 00:18:31.462 ' 00:18:31.462 11:46:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:31.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.462 --rc genhtml_branch_coverage=1 00:18:31.462 --rc genhtml_function_coverage=1 00:18:31.462 --rc genhtml_legend=1 00:18:31.462 --rc geninfo_all_blocks=1 00:18:31.462 --rc geninfo_unexecuted_blocks=1 00:18:31.462 00:18:31.462 ' 00:18:31.462 11:46:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:31.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.462 --rc genhtml_branch_coverage=1 00:18:31.462 --rc genhtml_function_coverage=1 00:18:31.462 --rc genhtml_legend=1 00:18:31.462 --rc geninfo_all_blocks=1 00:18:31.462 --rc geninfo_unexecuted_blocks=1 00:18:31.462 00:18:31.462 ' 00:18:31.462 11:46:02 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.462 11:46:02 -- nvmf/common.sh@7 -- # uname -s 00:18:31.462 11:46:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.462 11:46:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.462 11:46:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.462 11:46:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.462 11:46:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.462 11:46:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.462 11:46:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.462 11:46:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.462 11:46:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.462 11:46:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.462 11:46:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:31.462 11:46:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:31.462 11:46:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.462 11:46:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.462 11:46:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.462 11:46:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:31.462 11:46:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.462 11:46:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.462 11:46:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.462 11:46:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.462 11:46:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.462 11:46:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.462 11:46:02 -- paths/export.sh@5 -- # export PATH 00:18:31.462 11:46:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.462 11:46:02 -- nvmf/common.sh@46 -- # : 0 00:18:31.462 11:46:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:31.462 11:46:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:31.462 11:46:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:31.462 11:46:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.462 11:46:02 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.462 11:46:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:31.462 11:46:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:31.462 11:46:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:31.462 11:46:02 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.462 11:46:02 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.462 11:46:02 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:31.462 11:46:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:31.462 11:46:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.462 11:46:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:31.462 11:46:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:31.462 11:46:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:31.462 11:46:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.462 11:46:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.462 11:46:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.462 11:46:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:31.462 11:46:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:31.462 11:46:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:31.462 11:46:02 -- common/autotest_common.sh@10 -- # set +x 00:18:39.576 11:46:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:39.576 11:46:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:39.576 11:46:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:39.576 11:46:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:39.576 11:46:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:39.576 11:46:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:39.576 11:46:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:39.576 11:46:08 -- nvmf/common.sh@294 -- # net_devs=() 00:18:39.576 11:46:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:39.576 11:46:08 -- nvmf/common.sh@295 -- # e810=() 00:18:39.576 11:46:08 -- nvmf/common.sh@295 -- # local -ga e810 00:18:39.576 11:46:08 -- nvmf/common.sh@296 -- # x722=() 00:18:39.576 11:46:08 -- nvmf/common.sh@296 -- # local -ga x722 00:18:39.576 11:46:08 -- nvmf/common.sh@297 -- # mlx=() 00:18:39.576 11:46:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:39.576 11:46:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.576 11:46:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:39.576 11:46:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:18:39.576 11:46:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:39.576 11:46:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:39.576 11:46:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:39.576 11:46:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:39.576 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:39.576 11:46:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:39.576 11:46:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:39.576 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:39.576 11:46:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:39.576 11:46:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:39.576 11:46:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.576 11:46:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:39.576 11:46:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.576 11:46:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:39.576 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:39.576 11:46:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.576 11:46:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.576 11:46:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:39.576 11:46:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.576 11:46:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:39.576 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:39.576 11:46:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.576 11:46:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:39.576 11:46:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:39.576 11:46:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:39.576 11:46:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:39.576 11:46:08 -- nvmf/common.sh@57 -- # uname 00:18:39.576 11:46:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:39.576 11:46:08 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:18:39.576 11:46:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:39.576 11:46:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:39.576 11:46:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:39.576 11:46:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:39.576 11:46:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:39.576 11:46:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:39.576 11:46:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:39.576 11:46:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:39.576 11:46:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:39.576 11:46:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:39.576 11:46:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:39.576 11:46:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:39.576 11:46:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:39.576 11:46:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:39.576 11:46:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:39.576 11:46:08 -- nvmf/common.sh@104 -- # continue 2 00:18:39.576 11:46:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.576 11:46:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:39.576 11:46:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:39.576 11:46:08 -- nvmf/common.sh@104 -- # continue 2 00:18:39.576 11:46:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:39.576 11:46:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:39.577 11:46:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:39.577 11:46:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:39.577 11:46:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:39.577 11:46:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:39.577 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:39.577 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:39.577 altname enp217s0f0np0 00:18:39.577 altname ens818f0np0 00:18:39.577 inet 192.168.100.8/24 scope global mlx_0_0 00:18:39.577 valid_lft forever preferred_lft forever 00:18:39.577 11:46:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:39.577 11:46:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:39.577 11:46:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:39.577 11:46:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:39.577 11:46:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:39.577 11:46:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:39.577 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:39.577 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:39.577 altname enp217s0f1np1 00:18:39.577 altname ens818f1np1 00:18:39.577 inet 192.168.100.9/24 scope global mlx_0_1 00:18:39.577 valid_lft forever preferred_lft forever 00:18:39.577 11:46:08 -- nvmf/common.sh@410 -- # return 0 00:18:39.577 11:46:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:39.577 11:46:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:39.577 11:46:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:39.577 11:46:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:39.577 11:46:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:39.577 11:46:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:39.577 11:46:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:39.577 11:46:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:39.577 11:46:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:39.577 11:46:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:39.577 11:46:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:39.577 11:46:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.577 11:46:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:39.577 11:46:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:39.577 11:46:08 -- nvmf/common.sh@104 -- # continue 2 00:18:39.577 11:46:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:39.577 11:46:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.577 11:46:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:39.577 11:46:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.577 11:46:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:39.577 11:46:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:39.577 11:46:08 -- nvmf/common.sh@104 -- # continue 2 00:18:39.577 11:46:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:39.577 11:46:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:39.577 11:46:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:39.577 11:46:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:39.577 11:46:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:39.577 11:46:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:39.577 11:46:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:39.577 11:46:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:39.577 192.168.100.9' 00:18:39.577 11:46:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:39.577 192.168.100.9' 00:18:39.577 11:46:08 -- nvmf/common.sh@445 -- # head -n 1 00:18:39.577 11:46:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:39.577 11:46:08 -- nvmf/common.sh@446 -- # head -n 1 00:18:39.577 11:46:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:39.577 192.168.100.9' 00:18:39.577 11:46:08 -- nvmf/common.sh@446 -- # tail -n +2 00:18:39.577 11:46:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:39.577 11:46:08 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:39.577 11:46:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:39.577 11:46:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:39.577 11:46:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:39.577 11:46:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:39.577 11:46:08 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:39.577 11:46:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:39.577 11:46:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:39.577 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 11:46:08 -- nvmf/common.sh@469 -- # nvmfpid=3753212 00:18:39.577 11:46:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:39.577 11:46:08 -- nvmf/common.sh@470 -- # waitforlisten 3753212 00:18:39.577 11:46:08 -- common/autotest_common.sh@829 -- # '[' -z 3753212 ']' 00:18:39.577 11:46:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.577 11:46:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.577 11:46:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.577 11:46:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.577 11:46:08 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 [2024-12-03 11:46:08.984580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:39.577 [2024-12-03 11:46:08.984627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.577 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.577 [2024-12-03 11:46:09.051257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.577 [2024-12-03 11:46:09.124974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:39.577 [2024-12-03 11:46:09.125105] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.577 [2024-12-03 11:46:09.125121] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.577 [2024-12-03 11:46:09.125130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
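The two addresses used for the rest of this run (192.168.100.8 and 192.168.100.9) are read straight off the mlx_0_0/mlx_0_1 interfaces detected above. A minimal sketch of the same lookup, using the interface names reported in this trace:

  # print the IPv4 address of each RDMA-capable net device found under the mlx5 ports
  for dev in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
  done
  # the first address printed becomes NVMF_FIRST_TARGET_IP, the second NVMF_SECOND_TARGET_IP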
00:18:39.577 [2024-12-03 11:46:09.125191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.577 [2024-12-03 11:46:09.125209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.577 [2024-12-03 11:46:09.125311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.577 [2024-12-03 11:46:09.125313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.577 11:46:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.577 11:46:09 -- common/autotest_common.sh@862 -- # return 0 00:18:39.577 11:46:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:39.577 11:46:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.577 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 11:46:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.577 11:46:09 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:39.577 11:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.577 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 11:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.577 11:46:09 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:39.577 11:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.577 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 11:46:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.577 11:46:09 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:39.577 11:46:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.577 11:46:09 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 [2024-12-03 11:46:09.952565] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10d90c0/0x10dd5b0) succeed. 00:18:39.577 [2024-12-03 11:46:09.961526] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10da6b0/0x111ec50) succeed. 
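The target bring-up logged above runs through the harness' rpc_cmd wrapper; issued directly against the checked-out spdk tree, the same sequence would look roughly like this (the test waits for the RPC socket to come up before sending any command):

  # start the target deferred (--wait-for-rpc), then configure it over the default RPC socket
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  ./scripts/rpc.py bdev_set_options -p 5 -c 1     # bdev I/O pool and cache sizes used by this test
  ./scripts/rpc.py framework_start_init           # finish deferred subsystem initialization
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192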
00:18:39.577 11:46:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.577 11:46:10 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.577 11:46:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.577 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 Malloc0 00:18:39.577 11:46:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.577 11:46:10 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:39.577 11:46:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.577 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 11:46:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.577 11:46:10 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.577 11:46:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.577 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 11:46:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.577 11:46:10 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:39.577 11:46:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.577 11:46:10 -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 [2024-12-03 11:46:10.141976] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:39.577 11:46:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.577 11:46:10 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3753498 00:18:39.577 11:46:10 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:39.577 11:46:10 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@30 -- # READ_PID=3753500 00:18:39.578 11:46:10 -- nvmf/common.sh@520 -- # config=() 00:18:39.578 11:46:10 -- nvmf/common.sh@520 -- # local subsystem config 00:18:39.578 11:46:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:39.578 11:46:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:39.578 { 00:18:39.578 "params": { 00:18:39.578 "name": "Nvme$subsystem", 00:18:39.578 "trtype": "$TEST_TRANSPORT", 00:18:39.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.578 "adrfam": "ipv4", 00:18:39.578 "trsvcid": "$NVMF_PORT", 00:18:39.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.578 "hdgst": ${hdgst:-false}, 00:18:39.578 "ddgst": ${ddgst:-false} 00:18:39.578 }, 00:18:39.578 "method": "bdev_nvme_attach_controller" 00:18:39.578 } 00:18:39.578 EOF 00:18:39.578 )") 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3753502 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:39.578 11:46:10 -- nvmf/common.sh@520 -- # config=() 00:18:39.578 11:46:10 -- nvmf/common.sh@520 -- # local subsystem config 00:18:39.578 11:46:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:39.578 11:46:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:39.578 { 00:18:39.578 "params": { 00:18:39.578 "name": 
"Nvme$subsystem", 00:18:39.578 "trtype": "$TEST_TRANSPORT", 00:18:39.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.578 "adrfam": "ipv4", 00:18:39.578 "trsvcid": "$NVMF_PORT", 00:18:39.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.578 "hdgst": ${hdgst:-false}, 00:18:39.578 "ddgst": ${ddgst:-false} 00:18:39.578 }, 00:18:39.578 "method": "bdev_nvme_attach_controller" 00:18:39.578 } 00:18:39.578 EOF 00:18:39.578 )") 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3753505 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:39.578 11:46:10 -- nvmf/common.sh@542 -- # cat 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@35 -- # sync 00:18:39.578 11:46:10 -- nvmf/common.sh@520 -- # config=() 00:18:39.578 11:46:10 -- nvmf/common.sh@520 -- # local subsystem config 00:18:39.578 11:46:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:39.578 11:46:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:39.578 { 00:18:39.578 "params": { 00:18:39.578 "name": "Nvme$subsystem", 00:18:39.578 "trtype": "$TEST_TRANSPORT", 00:18:39.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.578 "adrfam": "ipv4", 00:18:39.578 "trsvcid": "$NVMF_PORT", 00:18:39.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.578 "hdgst": ${hdgst:-false}, 00:18:39.578 "ddgst": ${ddgst:-false} 00:18:39.578 }, 00:18:39.578 "method": "bdev_nvme_attach_controller" 00:18:39.578 } 00:18:39.578 EOF 00:18:39.578 )") 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:39.578 11:46:10 -- nvmf/common.sh@520 -- # config=() 00:18:39.578 11:46:10 -- nvmf/common.sh@542 -- # cat 00:18:39.578 11:46:10 -- nvmf/common.sh@520 -- # local subsystem config 00:18:39.578 11:46:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:39.578 11:46:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:39.578 { 00:18:39.578 "params": { 00:18:39.578 "name": "Nvme$subsystem", 00:18:39.578 "trtype": "$TEST_TRANSPORT", 00:18:39.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.578 "adrfam": "ipv4", 00:18:39.578 "trsvcid": "$NVMF_PORT", 00:18:39.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.578 "hdgst": ${hdgst:-false}, 00:18:39.578 "ddgst": ${ddgst:-false} 00:18:39.578 }, 00:18:39.578 "method": "bdev_nvme_attach_controller" 00:18:39.578 } 00:18:39.578 EOF 00:18:39.578 )") 00:18:39.578 11:46:10 -- nvmf/common.sh@542 -- # cat 00:18:39.578 11:46:10 -- target/bdev_io_wait.sh@37 -- # wait 3753498 00:18:39.578 11:46:10 -- nvmf/common.sh@542 -- # cat 00:18:39.578 11:46:10 -- nvmf/common.sh@544 -- # jq . 00:18:39.578 11:46:10 -- nvmf/common.sh@544 -- # jq . 00:18:39.578 11:46:10 -- nvmf/common.sh@544 -- # jq . 
00:18:39.578 11:46:10 -- nvmf/common.sh@545 -- # IFS=, 00:18:39.578 11:46:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:39.578 "params": { 00:18:39.578 "name": "Nvme1", 00:18:39.578 "trtype": "rdma", 00:18:39.578 "traddr": "192.168.100.8", 00:18:39.578 "adrfam": "ipv4", 00:18:39.578 "trsvcid": "4420", 00:18:39.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.578 "hdgst": false, 00:18:39.578 "ddgst": false 00:18:39.578 }, 00:18:39.578 "method": "bdev_nvme_attach_controller" 00:18:39.578 }' 00:18:39.578 11:46:10 -- nvmf/common.sh@544 -- # jq . 00:18:39.578 11:46:10 -- nvmf/common.sh@545 -- # IFS=, 00:18:39.578 11:46:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:39.578 "params": { 00:18:39.578 "name": "Nvme1", 00:18:39.578 "trtype": "rdma", 00:18:39.578 "traddr": "192.168.100.8", 00:18:39.578 "adrfam": "ipv4", 00:18:39.578 "trsvcid": "4420", 00:18:39.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.578 "hdgst": false, 00:18:39.578 "ddgst": false 00:18:39.578 }, 00:18:39.578 "method": "bdev_nvme_attach_controller" 00:18:39.578 }' 00:18:39.578 11:46:10 -- nvmf/common.sh@545 -- # IFS=, 00:18:39.578 11:46:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:39.578 "params": { 00:18:39.578 "name": "Nvme1", 00:18:39.578 "trtype": "rdma", 00:18:39.578 "traddr": "192.168.100.8", 00:18:39.578 "adrfam": "ipv4", 00:18:39.578 "trsvcid": "4420", 00:18:39.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.578 "hdgst": false, 00:18:39.578 "ddgst": false 00:18:39.578 }, 00:18:39.578 "method": "bdev_nvme_attach_controller" 00:18:39.578 }' 00:18:39.578 11:46:10 -- nvmf/common.sh@545 -- # IFS=, 00:18:39.578 11:46:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:39.578 "params": { 00:18:39.578 "name": "Nvme1", 00:18:39.578 "trtype": "rdma", 00:18:39.578 "traddr": "192.168.100.8", 00:18:39.578 "adrfam": "ipv4", 00:18:39.578 "trsvcid": "4420", 00:18:39.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.578 "hdgst": false, 00:18:39.578 "ddgst": false 00:18:39.578 }, 00:18:39.578 "method": "bdev_nvme_attach_controller" 00:18:39.578 }' 00:18:39.836 [2024-12-03 11:46:10.189694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:39.836 [2024-12-03 11:46:10.189748] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:39.836 [2024-12-03 11:46:10.191704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:39.836 [2024-12-03 11:46:10.191752] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:39.836 [2024-12-03 11:46:10.191810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
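Each bdevperf instance receives its controller definition as the JSON printed above, handed over on a process-substitution fd (the /dev/fd/63 seen in the command lines). A sketch of the write instance under that scheme; the read, flush and unmap instances differ only in -m/-i and the -w workload:

  # gen_nvmf_target_json resolves to a bdev_nvme_attach_controller entry pointing at
  # rdma/192.168.100.8:4420, subsystem nqn.2016-06.io.spdk:cnode1 (see the printf output above)
  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)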
00:18:39.836 [2024-12-03 11:46:10.191851] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:39.836 [2024-12-03 11:46:10.193226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:39.836 [2024-12-03 11:46:10.193273] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:39.836 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.836 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.836 [2024-12-03 11:46:10.380151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.836 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.094 [2024-12-03 11:46:10.451777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:40.094 [2024-12-03 11:46:10.478952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.094 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.094 [2024-12-03 11:46:10.525178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.094 [2024-12-03 11:46:10.564498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:40.094 [2024-12-03 11:46:10.599012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:40.094 [2024-12-03 11:46:10.602673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.094 [2024-12-03 11:46:10.674600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:40.351 Running I/O for 1 seconds... 00:18:40.351 Running I/O for 1 seconds... 00:18:40.351 Running I/O for 1 seconds... 00:18:40.351 Running I/O for 1 seconds... 
00:18:41.283 00:18:41.283 Latency(us) 00:18:41.283 [2024-12-03T10:46:11.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.283 [2024-12-03T10:46:11.897Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:41.283 Nvme1n1 : 1.01 18287.72 71.44 0.00 0.00 6978.31 3748.66 13946.06 00:18:41.283 [2024-12-03T10:46:11.897Z] =================================================================================================================== 00:18:41.283 [2024-12-03T10:46:11.897Z] Total : 18287.72 71.44 0.00 0.00 6978.31 3748.66 13946.06 00:18:41.283 00:18:41.283 Latency(us) 00:18:41.283 [2024-12-03T10:46:11.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.283 [2024-12-03T10:46:11.897Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:41.283 Nvme1n1 : 1.00 17709.41 69.18 0.00 0.00 7208.57 4430.23 15728.64 00:18:41.283 [2024-12-03T10:46:11.897Z] =================================================================================================================== 00:18:41.283 [2024-12-03T10:46:11.897Z] Total : 17709.41 69.18 0.00 0.00 7208.57 4430.23 15728.64 00:18:41.283 00:18:41.283 Latency(us) 00:18:41.283 [2024-12-03T10:46:11.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.283 [2024-12-03T10:46:11.897Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:41.283 Nvme1n1 : 1.00 14947.32 58.39 0.00 0.00 8544.45 3643.80 20342.37 00:18:41.283 [2024-12-03T10:46:11.897Z] =================================================================================================================== 00:18:41.283 [2024-12-03T10:46:11.897Z] Total : 14947.32 58.39 0.00 0.00 8544.45 3643.80 20342.37 00:18:41.283 00:18:41.283 Latency(us) 00:18:41.283 [2024-12-03T10:46:11.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.283 [2024-12-03T10:46:11.897Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:41.283 Nvme1n1 : 1.00 264419.71 1032.89 0.00 0.00 482.77 192.51 1644.95 00:18:41.283 [2024-12-03T10:46:11.897Z] =================================================================================================================== 00:18:41.283 [2024-12-03T10:46:11.897Z] Total : 264419.71 1032.89 0.00 0.00 482.77 192.51 1644.95 00:18:41.542 11:46:12 -- target/bdev_io_wait.sh@38 -- # wait 3753500 00:18:41.542 11:46:12 -- target/bdev_io_wait.sh@39 -- # wait 3753502 00:18:41.542 11:46:12 -- target/bdev_io_wait.sh@40 -- # wait 3753505 00:18:41.542 11:46:12 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.542 11:46:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.542 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:18:41.542 11:46:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.542 11:46:12 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:41.542 11:46:12 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:41.542 11:46:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:41.542 11:46:12 -- nvmf/common.sh@116 -- # sync 00:18:41.542 11:46:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:41.542 11:46:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:41.542 11:46:12 -- nvmf/common.sh@119 -- # set +e 00:18:41.542 11:46:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:41.542 11:46:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:41.801 rmmod nvme_rdma 
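With the four 1-second runs reaped, the test tears the fabric back down; a condensed sketch of that cleanup, assuming nvmfpid still holds the target PID captured at startup:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-rdma       # also pulls nvme_fabrics out, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # killprocess 3753212 in the trace above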
00:18:41.801 rmmod nvme_fabrics 00:18:41.801 11:46:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:41.801 11:46:12 -- nvmf/common.sh@123 -- # set -e 00:18:41.801 11:46:12 -- nvmf/common.sh@124 -- # return 0 00:18:41.801 11:46:12 -- nvmf/common.sh@477 -- # '[' -n 3753212 ']' 00:18:41.801 11:46:12 -- nvmf/common.sh@478 -- # killprocess 3753212 00:18:41.801 11:46:12 -- common/autotest_common.sh@936 -- # '[' -z 3753212 ']' 00:18:41.801 11:46:12 -- common/autotest_common.sh@940 -- # kill -0 3753212 00:18:41.801 11:46:12 -- common/autotest_common.sh@941 -- # uname 00:18:41.801 11:46:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:41.801 11:46:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3753212 00:18:41.801 11:46:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:41.801 11:46:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:41.801 11:46:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3753212' 00:18:41.801 killing process with pid 3753212 00:18:41.801 11:46:12 -- common/autotest_common.sh@955 -- # kill 3753212 00:18:41.801 11:46:12 -- common/autotest_common.sh@960 -- # wait 3753212 00:18:42.059 11:46:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:42.059 11:46:12 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:42.059 00:18:42.059 real 0m10.704s 00:18:42.059 user 0m21.268s 00:18:42.059 sys 0m6.603s 00:18:42.059 11:46:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:42.059 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:18:42.059 ************************************ 00:18:42.059 END TEST nvmf_bdev_io_wait 00:18:42.059 ************************************ 00:18:42.059 11:46:12 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:42.059 11:46:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:42.059 11:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.059 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:18:42.059 ************************************ 00:18:42.059 START TEST nvmf_queue_depth 00:18:42.060 ************************************ 00:18:42.060 11:46:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:42.060 * Looking for test storage... 
00:18:42.060 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:42.060 11:46:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:42.060 11:46:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:42.060 11:46:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:42.318 11:46:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:42.318 11:46:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:42.318 11:46:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:42.318 11:46:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:42.318 11:46:12 -- scripts/common.sh@335 -- # IFS=.-: 00:18:42.318 11:46:12 -- scripts/common.sh@335 -- # read -ra ver1 00:18:42.318 11:46:12 -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.318 11:46:12 -- scripts/common.sh@336 -- # read -ra ver2 00:18:42.318 11:46:12 -- scripts/common.sh@337 -- # local 'op=<' 00:18:42.318 11:46:12 -- scripts/common.sh@339 -- # ver1_l=2 00:18:42.318 11:46:12 -- scripts/common.sh@340 -- # ver2_l=1 00:18:42.318 11:46:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:42.318 11:46:12 -- scripts/common.sh@343 -- # case "$op" in 00:18:42.318 11:46:12 -- scripts/common.sh@344 -- # : 1 00:18:42.318 11:46:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:42.318 11:46:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:42.318 11:46:12 -- scripts/common.sh@364 -- # decimal 1 00:18:42.318 11:46:12 -- scripts/common.sh@352 -- # local d=1 00:18:42.318 11:46:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.318 11:46:12 -- scripts/common.sh@354 -- # echo 1 00:18:42.318 11:46:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:42.318 11:46:12 -- scripts/common.sh@365 -- # decimal 2 00:18:42.318 11:46:12 -- scripts/common.sh@352 -- # local d=2 00:18:42.318 11:46:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.318 11:46:12 -- scripts/common.sh@354 -- # echo 2 00:18:42.318 11:46:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:42.318 11:46:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:42.318 11:46:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:42.318 11:46:12 -- scripts/common.sh@367 -- # return 0 00:18:42.318 11:46:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.318 11:46:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:42.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.319 --rc genhtml_branch_coverage=1 00:18:42.319 --rc genhtml_function_coverage=1 00:18:42.319 --rc genhtml_legend=1 00:18:42.319 --rc geninfo_all_blocks=1 00:18:42.319 --rc geninfo_unexecuted_blocks=1 00:18:42.319 00:18:42.319 ' 00:18:42.319 11:46:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:42.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.319 --rc genhtml_branch_coverage=1 00:18:42.319 --rc genhtml_function_coverage=1 00:18:42.319 --rc genhtml_legend=1 00:18:42.319 --rc geninfo_all_blocks=1 00:18:42.319 --rc geninfo_unexecuted_blocks=1 00:18:42.319 00:18:42.319 ' 00:18:42.319 11:46:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:42.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.319 --rc genhtml_branch_coverage=1 00:18:42.319 --rc genhtml_function_coverage=1 00:18:42.319 --rc genhtml_legend=1 00:18:42.319 --rc geninfo_all_blocks=1 00:18:42.319 --rc geninfo_unexecuted_blocks=1 00:18:42.319 00:18:42.319 ' 
00:18:42.319 11:46:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:42.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.319 --rc genhtml_branch_coverage=1 00:18:42.319 --rc genhtml_function_coverage=1 00:18:42.319 --rc genhtml_legend=1 00:18:42.319 --rc geninfo_all_blocks=1 00:18:42.319 --rc geninfo_unexecuted_blocks=1 00:18:42.319 00:18:42.319 ' 00:18:42.319 11:46:12 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.319 11:46:12 -- nvmf/common.sh@7 -- # uname -s 00:18:42.319 11:46:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.319 11:46:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.319 11:46:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.319 11:46:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.319 11:46:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.319 11:46:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.319 11:46:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.319 11:46:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.319 11:46:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.319 11:46:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.319 11:46:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:42.319 11:46:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:42.319 11:46:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.319 11:46:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.319 11:46:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.319 11:46:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:42.319 11:46:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.319 11:46:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.319 11:46:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.319 11:46:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.319 11:46:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.319 11:46:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.319 11:46:12 -- paths/export.sh@5 -- # export PATH 00:18:42.319 11:46:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.319 11:46:12 -- nvmf/common.sh@46 -- # : 0 00:18:42.319 11:46:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:42.319 11:46:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:42.319 11:46:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:42.319 11:46:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.319 11:46:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.319 11:46:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:42.319 11:46:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:42.319 11:46:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:42.319 11:46:12 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:42.319 11:46:12 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:42.319 11:46:12 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.319 11:46:12 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:42.319 11:46:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:42.319 11:46:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.319 11:46:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:42.319 11:46:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:42.319 11:46:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:42.319 11:46:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.319 11:46:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.319 11:46:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.319 11:46:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:42.319 11:46:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:42.319 11:46:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:42.319 11:46:12 -- common/autotest_common.sh@10 -- # set +x 00:18:48.883 11:46:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:48.883 11:46:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:48.883 11:46:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:48.883 11:46:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:48.883 11:46:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:48.883 11:46:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:48.883 11:46:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:48.883 11:46:19 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:48.883 11:46:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:48.883 11:46:19 -- nvmf/common.sh@295 -- # e810=() 00:18:48.883 11:46:19 -- nvmf/common.sh@295 -- # local -ga e810 00:18:48.883 11:46:19 -- nvmf/common.sh@296 -- # x722=() 00:18:48.883 11:46:19 -- nvmf/common.sh@296 -- # local -ga x722 00:18:48.883 11:46:19 -- nvmf/common.sh@297 -- # mlx=() 00:18:48.883 11:46:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:48.883 11:46:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.883 11:46:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.883 11:46:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.883 11:46:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.884 11:46:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.884 11:46:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.884 11:46:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.884 11:46:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.884 11:46:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.884 11:46:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.884 11:46:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.884 11:46:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:48.884 11:46:19 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:48.884 11:46:19 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:48.884 11:46:19 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:48.884 11:46:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:48.884 11:46:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:48.884 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:48.884 11:46:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:48.884 11:46:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:48.884 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:48.884 11:46:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:48.884 11:46:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:48.884 11:46:19 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.884 11:46:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:48.884 11:46:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.884 11:46:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:48.884 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:48.884 11:46:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.884 11:46:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.884 11:46:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:48.884 11:46:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.884 11:46:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:48.884 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:48.884 11:46:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.884 11:46:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:48.884 11:46:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:48.884 11:46:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:48.884 11:46:19 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:48.884 11:46:19 -- nvmf/common.sh@57 -- # uname 00:18:48.884 11:46:19 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:48.884 11:46:19 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:48.884 11:46:19 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:48.884 11:46:19 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:48.884 11:46:19 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:48.884 11:46:19 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:48.884 11:46:19 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:48.884 11:46:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:48.884 11:46:19 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:48.884 11:46:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:48.884 11:46:19 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:48.884 11:46:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:48.884 11:46:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:48.884 11:46:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:48.884 11:46:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:48.884 11:46:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:48.884 11:46:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:48.884 11:46:19 -- nvmf/common.sh@104 -- # continue 2 00:18:48.884 11:46:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:48.884 11:46:19 -- 
nvmf/common.sh@104 -- # continue 2 00:18:48.884 11:46:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:48.884 11:46:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:48.884 11:46:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:48.884 11:46:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:48.884 11:46:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:48.884 11:46:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:48.884 11:46:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:48.884 11:46:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:48.884 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:48.884 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:48.884 altname enp217s0f0np0 00:18:48.884 altname ens818f0np0 00:18:48.884 inet 192.168.100.8/24 scope global mlx_0_0 00:18:48.884 valid_lft forever preferred_lft forever 00:18:48.884 11:46:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:48.884 11:46:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:48.884 11:46:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:48.884 11:46:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:48.884 11:46:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:48.884 11:46:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:48.884 11:46:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:48.884 11:46:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:48.884 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:48.884 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:48.884 altname enp217s0f1np1 00:18:48.884 altname ens818f1np1 00:18:48.884 inet 192.168.100.9/24 scope global mlx_0_1 00:18:48.884 valid_lft forever preferred_lft forever 00:18:48.884 11:46:19 -- nvmf/common.sh@410 -- # return 0 00:18:48.884 11:46:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:48.884 11:46:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:48.884 11:46:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:48.884 11:46:19 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:48.884 11:46:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:48.884 11:46:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:48.884 11:46:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:48.884 11:46:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:48.884 11:46:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:48.884 11:46:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:48.884 11:46:19 -- nvmf/common.sh@104 -- # continue 2 00:18:48.884 11:46:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:48.884 11:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.884 11:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:18:48.884 11:46:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:48.884 11:46:19 -- nvmf/common.sh@104 -- # continue 2 00:18:48.884 11:46:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:48.885 11:46:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:48.885 11:46:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:48.885 11:46:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:48.885 11:46:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:48.885 11:46:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:48.885 11:46:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:48.885 11:46:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:48.885 11:46:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:48.885 11:46:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:48.885 11:46:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:48.885 11:46:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:48.885 11:46:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:48.885 192.168.100.9' 00:18:48.885 11:46:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:48.885 192.168.100.9' 00:18:48.885 11:46:19 -- nvmf/common.sh@445 -- # head -n 1 00:18:48.885 11:46:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:48.885 11:46:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:48.885 192.168.100.9' 00:18:48.885 11:46:19 -- nvmf/common.sh@446 -- # tail -n +2 00:18:48.885 11:46:19 -- nvmf/common.sh@446 -- # head -n 1 00:18:48.885 11:46:19 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:48.885 11:46:19 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:48.885 11:46:19 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:48.885 11:46:19 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:48.885 11:46:19 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:48.885 11:46:19 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:48.885 11:46:19 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:48.885 11:46:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:48.885 11:46:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.885 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:18:48.885 11:46:19 -- nvmf/common.sh@469 -- # nvmfpid=3757200 00:18:48.885 11:46:19 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.885 11:46:19 -- nvmf/common.sh@470 -- # waitforlisten 3757200 00:18:48.885 11:46:19 -- common/autotest_common.sh@829 -- # '[' -z 3757200 ']' 00:18:48.885 11:46:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.885 11:46:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.885 11:46:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.885 11:46:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.885 11:46:19 -- common/autotest_common.sh@10 -- # set +x 00:18:48.885 [2024-12-03 11:46:19.448494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:48.885 [2024-12-03 11:46:19.448547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.885 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.143 [2024-12-03 11:46:19.520001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.143 [2024-12-03 11:46:19.588722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:49.143 [2024-12-03 11:46:19.588835] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.143 [2024-12-03 11:46:19.588845] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.143 [2024-12-03 11:46:19.588854] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.143 [2024-12-03 11:46:19.588875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.708 11:46:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.708 11:46:20 -- common/autotest_common.sh@862 -- # return 0 00:18:49.708 11:46:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:49.709 11:46:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.709 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:18:49.709 11:46:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.709 11:46:20 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:49.709 11:46:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.709 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:18:49.966 [2024-12-03 11:46:20.332455] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1526230/0x152a720) succeed. 00:18:49.966 [2024-12-03 11:46:20.341448] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1527730/0x156bdc0) succeed. 
00:18:49.967 11:46:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.967 11:46:20 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:49.967 11:46:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.967 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:18:49.967 Malloc0 00:18:49.967 11:46:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.967 11:46:20 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:49.967 11:46:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.967 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:18:49.967 11:46:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.967 11:46:20 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:49.967 11:46:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.967 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:18:49.967 11:46:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.967 11:46:20 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:49.967 11:46:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.967 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:18:49.967 [2024-12-03 11:46:20.434347] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:49.967 11:46:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.967 11:46:20 -- target/queue_depth.sh@30 -- # bdevperf_pid=3757278 00:18:49.967 11:46:20 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:49.967 11:46:20 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.967 11:46:20 -- target/queue_depth.sh@33 -- # waitforlisten 3757278 /var/tmp/bdevperf.sock 00:18:49.967 11:46:20 -- common/autotest_common.sh@829 -- # '[' -z 3757278 ']' 00:18:49.967 11:46:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.967 11:46:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.967 11:46:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.967 11:46:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.967 11:46:20 -- common/autotest_common.sh@10 -- # set +x 00:18:49.967 [2024-12-03 11:46:20.485345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
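The queue-depth measurement itself is driven from a bdevperf started in wait mode (-z) on its own RPC socket; once the NVMe-oF controller is attached over RDMA, perform_tests kicks off the 10-second verify run reported below. A condensed sketch using the socket path and NQN from this run:

  # start bdevperf idle and remote-controlled on /var/tmp/bdevperf.sock
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the controller once the socket is listening, then launch the run
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests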
00:18:49.967 [2024-12-03 11:46:20.485403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757278 ] 00:18:49.967 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.967 [2024-12-03 11:46:20.554628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.225 [2024-12-03 11:46:20.630443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.789 11:46:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.789 11:46:21 -- common/autotest_common.sh@862 -- # return 0 00:18:50.789 11:46:21 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:50.789 11:46:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.789 11:46:21 -- common/autotest_common.sh@10 -- # set +x 00:18:51.046 NVMe0n1 00:18:51.046 11:46:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.046 11:46:21 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:51.046 Running I/O for 10 seconds... 00:19:01.017 00:19:01.017 Latency(us) 00:19:01.017 [2024-12-03T10:46:31.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.017 [2024-12-03T10:46:31.631Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:01.017 Verification LBA range: start 0x0 length 0x4000 00:19:01.017 NVMe0n1 : 10.03 29446.86 115.03 0.00 0.00 34696.82 7864.32 35232.15 00:19:01.017 [2024-12-03T10:46:31.631Z] =================================================================================================================== 00:19:01.017 [2024-12-03T10:46:31.631Z] Total : 29446.86 115.03 0.00 0.00 34696.82 7864.32 35232.15 00:19:01.017 0 00:19:01.017 11:46:31 -- target/queue_depth.sh@39 -- # killprocess 3757278 00:19:01.017 11:46:31 -- common/autotest_common.sh@936 -- # '[' -z 3757278 ']' 00:19:01.017 11:46:31 -- common/autotest_common.sh@940 -- # kill -0 3757278 00:19:01.017 11:46:31 -- common/autotest_common.sh@941 -- # uname 00:19:01.017 11:46:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.017 11:46:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3757278 00:19:01.274 11:46:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:01.274 11:46:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:01.274 11:46:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3757278' 00:19:01.274 killing process with pid 3757278 00:19:01.274 11:46:31 -- common/autotest_common.sh@955 -- # kill 3757278 00:19:01.274 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.274 00:19:01.274 Latency(us) 00:19:01.274 [2024-12-03T10:46:31.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.274 [2024-12-03T10:46:31.888Z] =================================================================================================================== 00:19:01.274 [2024-12-03T10:46:31.888Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.274 11:46:31 -- common/autotest_common.sh@960 -- # wait 3757278 00:19:01.274 11:46:31 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:01.274 11:46:31 -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:19:01.274 11:46:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:01.274 11:46:31 -- nvmf/common.sh@116 -- # sync 00:19:01.274 11:46:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:01.274 11:46:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:01.274 11:46:31 -- nvmf/common.sh@119 -- # set +e 00:19:01.274 11:46:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:01.274 11:46:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:01.274 rmmod nvme_rdma 00:19:01.550 rmmod nvme_fabrics 00:19:01.550 11:46:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:01.550 11:46:31 -- nvmf/common.sh@123 -- # set -e 00:19:01.550 11:46:31 -- nvmf/common.sh@124 -- # return 0 00:19:01.550 11:46:31 -- nvmf/common.sh@477 -- # '[' -n 3757200 ']' 00:19:01.550 11:46:31 -- nvmf/common.sh@478 -- # killprocess 3757200 00:19:01.550 11:46:31 -- common/autotest_common.sh@936 -- # '[' -z 3757200 ']' 00:19:01.550 11:46:31 -- common/autotest_common.sh@940 -- # kill -0 3757200 00:19:01.550 11:46:31 -- common/autotest_common.sh@941 -- # uname 00:19:01.550 11:46:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.550 11:46:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3757200 00:19:01.550 11:46:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:01.550 11:46:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:01.550 11:46:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3757200' 00:19:01.550 killing process with pid 3757200 00:19:01.550 11:46:31 -- common/autotest_common.sh@955 -- # kill 3757200 00:19:01.550 11:46:31 -- common/autotest_common.sh@960 -- # wait 3757200 00:19:01.832 11:46:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:01.832 11:46:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:01.832 00:19:01.832 real 0m19.673s 00:19:01.832 user 0m26.428s 00:19:01.832 sys 0m5.799s 00:19:01.832 11:46:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:01.832 11:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:01.832 ************************************ 00:19:01.832 END TEST nvmf_queue_depth 00:19:01.832 ************************************ 00:19:01.833 11:46:32 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:01.833 11:46:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:01.833 11:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:01.833 11:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:01.833 ************************************ 00:19:01.833 START TEST nvmf_multipath 00:19:01.833 ************************************ 00:19:01.833 11:46:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:01.833 * Looking for test storage... 
00:19:01.833 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:01.833 11:46:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:01.833 11:46:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:01.833 11:46:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:02.100 11:46:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:02.100 11:46:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:02.100 11:46:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:02.100 11:46:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:02.100 11:46:32 -- scripts/common.sh@335 -- # IFS=.-: 00:19:02.100 11:46:32 -- scripts/common.sh@335 -- # read -ra ver1 00:19:02.100 11:46:32 -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.100 11:46:32 -- scripts/common.sh@336 -- # read -ra ver2 00:19:02.100 11:46:32 -- scripts/common.sh@337 -- # local 'op=<' 00:19:02.100 11:46:32 -- scripts/common.sh@339 -- # ver1_l=2 00:19:02.100 11:46:32 -- scripts/common.sh@340 -- # ver2_l=1 00:19:02.100 11:46:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:02.100 11:46:32 -- scripts/common.sh@343 -- # case "$op" in 00:19:02.100 11:46:32 -- scripts/common.sh@344 -- # : 1 00:19:02.100 11:46:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:02.100 11:46:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.100 11:46:32 -- scripts/common.sh@364 -- # decimal 1 00:19:02.100 11:46:32 -- scripts/common.sh@352 -- # local d=1 00:19:02.100 11:46:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.100 11:46:32 -- scripts/common.sh@354 -- # echo 1 00:19:02.100 11:46:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:02.100 11:46:32 -- scripts/common.sh@365 -- # decimal 2 00:19:02.100 11:46:32 -- scripts/common.sh@352 -- # local d=2 00:19:02.100 11:46:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.100 11:46:32 -- scripts/common.sh@354 -- # echo 2 00:19:02.100 11:46:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:02.100 11:46:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:02.100 11:46:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:02.100 11:46:32 -- scripts/common.sh@367 -- # return 0 00:19:02.100 11:46:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.100 11:46:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.100 --rc genhtml_branch_coverage=1 00:19:02.100 --rc genhtml_function_coverage=1 00:19:02.100 --rc genhtml_legend=1 00:19:02.100 --rc geninfo_all_blocks=1 00:19:02.100 --rc geninfo_unexecuted_blocks=1 00:19:02.100 00:19:02.100 ' 00:19:02.100 11:46:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.100 --rc genhtml_branch_coverage=1 00:19:02.100 --rc genhtml_function_coverage=1 00:19:02.100 --rc genhtml_legend=1 00:19:02.100 --rc geninfo_all_blocks=1 00:19:02.100 --rc geninfo_unexecuted_blocks=1 00:19:02.100 00:19:02.100 ' 00:19:02.100 11:46:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.100 --rc genhtml_branch_coverage=1 00:19:02.100 --rc genhtml_function_coverage=1 00:19:02.100 --rc genhtml_legend=1 00:19:02.100 --rc geninfo_all_blocks=1 00:19:02.101 --rc geninfo_unexecuted_blocks=1 00:19:02.101 00:19:02.101 ' 
00:19:02.101 11:46:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.101 --rc genhtml_branch_coverage=1 00:19:02.101 --rc genhtml_function_coverage=1 00:19:02.101 --rc genhtml_legend=1 00:19:02.101 --rc geninfo_all_blocks=1 00:19:02.101 --rc geninfo_unexecuted_blocks=1 00:19:02.101 00:19:02.101 ' 00:19:02.101 11:46:32 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.101 11:46:32 -- nvmf/common.sh@7 -- # uname -s 00:19:02.101 11:46:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.101 11:46:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.101 11:46:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.101 11:46:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.101 11:46:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.101 11:46:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.101 11:46:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.101 11:46:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.101 11:46:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.101 11:46:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.101 11:46:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:02.101 11:46:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:02.101 11:46:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.101 11:46:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.101 11:46:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.101 11:46:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:02.101 11:46:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.101 11:46:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.101 11:46:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.101 11:46:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.101 11:46:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.101 11:46:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.101 11:46:32 -- paths/export.sh@5 -- # export PATH 00:19:02.101 11:46:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.101 11:46:32 -- nvmf/common.sh@46 -- # : 0 00:19:02.101 11:46:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:02.101 11:46:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:02.101 11:46:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:02.101 11:46:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.101 11:46:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.101 11:46:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:02.101 11:46:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:02.101 11:46:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:02.101 11:46:32 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.101 11:46:32 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.101 11:46:32 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:02.101 11:46:32 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:02.101 11:46:32 -- target/multipath.sh@43 -- # nvmftestinit 00:19:02.101 11:46:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:02.101 11:46:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.101 11:46:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:02.101 11:46:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:02.101 11:46:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:02.101 11:46:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.101 11:46:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.101 11:46:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.101 11:46:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:02.101 11:46:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:02.101 11:46:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:02.101 11:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:08.661 11:46:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:08.661 11:46:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:08.661 11:46:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:08.661 11:46:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:08.661 11:46:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:08.661 11:46:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:08.661 11:46:38 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:08.661 11:46:38 -- nvmf/common.sh@294 -- # net_devs=() 00:19:08.661 11:46:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:08.661 11:46:38 -- nvmf/common.sh@295 -- # e810=() 00:19:08.661 11:46:38 -- nvmf/common.sh@295 -- # local -ga e810 00:19:08.661 11:46:38 -- nvmf/common.sh@296 -- # x722=() 00:19:08.661 11:46:38 -- nvmf/common.sh@296 -- # local -ga x722 00:19:08.661 11:46:38 -- nvmf/common.sh@297 -- # mlx=() 00:19:08.661 11:46:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:08.661 11:46:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.661 11:46:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:08.661 11:46:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:08.661 11:46:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:08.661 11:46:38 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:08.661 11:46:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:08.661 11:46:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:08.661 11:46:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:08.661 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:08.661 11:46:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:08.661 11:46:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:08.661 11:46:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:08.661 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:08.661 11:46:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:08.661 11:46:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:08.662 11:46:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:08.662 11:46:38 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:08.662 11:46:38 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.662 11:46:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:08.662 11:46:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.662 11:46:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:08.662 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.662 11:46:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.662 11:46:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:08.662 11:46:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.662 11:46:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:08.662 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.662 11:46:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:08.662 11:46:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:08.662 11:46:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:08.662 11:46:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:08.662 11:46:38 -- nvmf/common.sh@57 -- # uname 00:19:08.662 11:46:38 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:08.662 11:46:38 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:08.662 11:46:38 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:08.662 11:46:38 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:08.662 11:46:38 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:08.662 11:46:38 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:08.662 11:46:38 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:08.662 11:46:38 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:08.662 11:46:38 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:08.662 11:46:38 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:08.662 11:46:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:08.662 11:46:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:08.662 11:46:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:08.662 11:46:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:08.662 11:46:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:08.662 11:46:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@104 -- # continue 2 00:19:08.662 11:46:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@104 -- # continue 2 00:19:08.662 11:46:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:08.662 11:46:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:08.662 11:46:38 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:08.662 11:46:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:08.662 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:08.662 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:08.662 altname enp217s0f0np0 00:19:08.662 altname ens818f0np0 00:19:08.662 inet 192.168.100.8/24 scope global mlx_0_0 00:19:08.662 valid_lft forever preferred_lft forever 00:19:08.662 11:46:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:08.662 11:46:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:08.662 11:46:38 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:08.662 11:46:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:08.662 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:08.662 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:08.662 altname enp217s0f1np1 00:19:08.662 altname ens818f1np1 00:19:08.662 inet 192.168.100.9/24 scope global mlx_0_1 00:19:08.662 valid_lft forever preferred_lft forever 00:19:08.662 11:46:38 -- nvmf/common.sh@410 -- # return 0 00:19:08.662 11:46:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:08.662 11:46:38 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:08.662 11:46:38 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:08.662 11:46:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:08.662 11:46:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:08.662 11:46:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:08.662 11:46:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:08.662 11:46:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:08.662 11:46:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@104 -- # continue 2 00:19:08.662 11:46:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:08.662 11:46:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:08.662 11:46:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@104 -- # continue 2 00:19:08.662 11:46:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:08.662 11:46:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:08.662 11:46:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:08.662 11:46:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:08.662 11:46:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:08.662 11:46:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:08.662 192.168.100.9' 00:19:08.662 11:46:38 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:08.662 192.168.100.9' 00:19:08.662 11:46:38 -- nvmf/common.sh@445 -- # head -n 1 00:19:08.662 11:46:38 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:08.662 11:46:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:08.662 192.168.100.9' 00:19:08.662 11:46:38 -- nvmf/common.sh@446 -- # tail -n +2 00:19:08.662 11:46:38 -- nvmf/common.sh@446 -- # head -n 1 00:19:08.662 11:46:38 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:08.662 11:46:38 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:08.662 11:46:38 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:08.662 11:46:38 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:08.662 11:46:38 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:08.662 11:46:38 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:08.662 run this test only with TCP transport for now 00:19:08.662 11:46:38 -- target/multipath.sh@53 -- # nvmftestfini 00:19:08.662 11:46:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:08.662 11:46:38 -- nvmf/common.sh@116 -- # sync 00:19:08.662 11:46:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@119 -- # set +e 00:19:08.662 11:46:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:08.662 11:46:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:08.662 rmmod nvme_rdma 00:19:08.662 rmmod nvme_fabrics 00:19:08.662 11:46:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:08.662 11:46:38 -- nvmf/common.sh@123 -- # set -e 00:19:08.662 11:46:38 -- nvmf/common.sh@124 -- # return 0 00:19:08.662 11:46:38 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:08.662 11:46:38 -- target/multipath.sh@54 -- # exit 0 00:19:08.662 11:46:38 -- target/multipath.sh@1 -- # nvmftestfini 00:19:08.662 11:46:38 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:19:08.662 11:46:38 -- nvmf/common.sh@116 -- # sync 00:19:08.662 11:46:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:08.662 11:46:38 -- nvmf/common.sh@119 -- # set +e 00:19:08.662 11:46:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:08.662 11:46:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:08.662 11:46:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:08.663 11:46:38 -- nvmf/common.sh@123 -- # set -e 00:19:08.663 11:46:38 -- nvmf/common.sh@124 -- # return 0 00:19:08.663 11:46:38 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:08.663 11:46:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:08.663 11:46:38 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:08.663 00:19:08.663 real 0m6.405s 00:19:08.663 user 0m1.692s 00:19:08.663 sys 0m4.875s 00:19:08.663 11:46:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:08.663 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.663 ************************************ 00:19:08.663 END TEST nvmf_multipath 00:19:08.663 ************************************ 00:19:08.663 11:46:38 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:08.663 11:46:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:08.663 11:46:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.663 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.663 ************************************ 00:19:08.663 START TEST nvmf_zcopy 00:19:08.663 ************************************ 00:19:08.663 11:46:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:08.663 * Looking for test storage... 00:19:08.663 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:08.663 11:46:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:08.663 11:46:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:08.663 11:46:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:08.663 11:46:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:08.663 11:46:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:08.663 11:46:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:08.663 11:46:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:08.663 11:46:38 -- scripts/common.sh@335 -- # IFS=.-: 00:19:08.663 11:46:38 -- scripts/common.sh@335 -- # read -ra ver1 00:19:08.663 11:46:38 -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.663 11:46:38 -- scripts/common.sh@336 -- # read -ra ver2 00:19:08.663 11:46:38 -- scripts/common.sh@337 -- # local 'op=<' 00:19:08.663 11:46:38 -- scripts/common.sh@339 -- # ver1_l=2 00:19:08.663 11:46:38 -- scripts/common.sh@340 -- # ver2_l=1 00:19:08.663 11:46:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:08.663 11:46:38 -- scripts/common.sh@343 -- # case "$op" in 00:19:08.663 11:46:38 -- scripts/common.sh@344 -- # : 1 00:19:08.663 11:46:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:08.663 11:46:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.663 11:46:38 -- scripts/common.sh@364 -- # decimal 1 00:19:08.663 11:46:38 -- scripts/common.sh@352 -- # local d=1 00:19:08.663 11:46:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.663 11:46:38 -- scripts/common.sh@354 -- # echo 1 00:19:08.663 11:46:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:08.663 11:46:38 -- scripts/common.sh@365 -- # decimal 2 00:19:08.663 11:46:38 -- scripts/common.sh@352 -- # local d=2 00:19:08.663 11:46:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.663 11:46:38 -- scripts/common.sh@354 -- # echo 2 00:19:08.663 11:46:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:08.663 11:46:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:08.663 11:46:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:08.663 11:46:38 -- scripts/common.sh@367 -- # return 0 00:19:08.663 11:46:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.663 11:46:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:08.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.663 --rc genhtml_branch_coverage=1 00:19:08.663 --rc genhtml_function_coverage=1 00:19:08.663 --rc genhtml_legend=1 00:19:08.663 --rc geninfo_all_blocks=1 00:19:08.663 --rc geninfo_unexecuted_blocks=1 00:19:08.663 00:19:08.663 ' 00:19:08.663 11:46:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:08.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.663 --rc genhtml_branch_coverage=1 00:19:08.663 --rc genhtml_function_coverage=1 00:19:08.663 --rc genhtml_legend=1 00:19:08.663 --rc geninfo_all_blocks=1 00:19:08.663 --rc geninfo_unexecuted_blocks=1 00:19:08.663 00:19:08.663 ' 00:19:08.663 11:46:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:08.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.663 --rc genhtml_branch_coverage=1 00:19:08.663 --rc genhtml_function_coverage=1 00:19:08.663 --rc genhtml_legend=1 00:19:08.663 --rc geninfo_all_blocks=1 00:19:08.663 --rc geninfo_unexecuted_blocks=1 00:19:08.663 00:19:08.663 ' 00:19:08.663 11:46:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:08.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.663 --rc genhtml_branch_coverage=1 00:19:08.663 --rc genhtml_function_coverage=1 00:19:08.663 --rc genhtml_legend=1 00:19:08.663 --rc geninfo_all_blocks=1 00:19:08.663 --rc geninfo_unexecuted_blocks=1 00:19:08.663 00:19:08.663 ' 00:19:08.663 11:46:38 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.663 11:46:38 -- nvmf/common.sh@7 -- # uname -s 00:19:08.663 11:46:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.663 11:46:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.663 11:46:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.663 11:46:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.663 11:46:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.663 11:46:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.663 11:46:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.663 11:46:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.663 11:46:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.663 11:46:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.663 11:46:38 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:08.663 11:46:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:08.663 11:46:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.663 11:46:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.663 11:46:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.663 11:46:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:08.663 11:46:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.663 11:46:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.663 11:46:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.663 11:46:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.663 11:46:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.663 11:46:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.663 11:46:38 -- paths/export.sh@5 -- # export PATH 00:19:08.663 11:46:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.663 11:46:38 -- nvmf/common.sh@46 -- # : 0 00:19:08.663 11:46:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:08.663 11:46:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:08.663 11:46:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:08.663 11:46:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.663 11:46:38 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.663 11:46:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:08.663 11:46:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:08.663 11:46:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:08.663 11:46:38 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:08.663 11:46:38 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:08.663 11:46:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.663 11:46:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:08.663 11:46:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:08.663 11:46:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:08.663 11:46:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.663 11:46:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.663 11:46:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.663 11:46:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:08.663 11:46:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:08.663 11:46:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:08.663 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:19:15.224 11:46:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:15.224 11:46:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:15.224 11:46:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:15.224 11:46:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:15.225 11:46:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:15.225 11:46:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:15.225 11:46:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:15.225 11:46:45 -- nvmf/common.sh@294 -- # net_devs=() 00:19:15.225 11:46:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:15.225 11:46:45 -- nvmf/common.sh@295 -- # e810=() 00:19:15.225 11:46:45 -- nvmf/common.sh@295 -- # local -ga e810 00:19:15.225 11:46:45 -- nvmf/common.sh@296 -- # x722=() 00:19:15.225 11:46:45 -- nvmf/common.sh@296 -- # local -ga x722 00:19:15.225 11:46:45 -- nvmf/common.sh@297 -- # mlx=() 00:19:15.225 11:46:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:15.225 11:46:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.225 11:46:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:15.225 11:46:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:15.225 11:46:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:15.225 11:46:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:15.225 
11:46:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:15.225 11:46:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:15.225 11:46:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:15.225 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:15.225 11:46:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:15.225 11:46:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:15.225 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:15.225 11:46:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:15.225 11:46:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:15.225 11:46:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.225 11:46:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:15.225 11:46:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.225 11:46:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:15.225 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:15.225 11:46:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.225 11:46:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.225 11:46:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:15.225 11:46:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.225 11:46:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:15.225 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:15.225 11:46:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.225 11:46:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:15.225 11:46:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:15.225 11:46:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:15.225 11:46:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:15.225 11:46:45 -- nvmf/common.sh@57 -- # uname 00:19:15.225 11:46:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:15.225 11:46:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:15.225 11:46:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:15.225 11:46:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:15.225 11:46:45 -- 
nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:15.225 11:46:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:15.225 11:46:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:15.225 11:46:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:15.225 11:46:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:15.225 11:46:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:15.225 11:46:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:15.225 11:46:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:15.225 11:46:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:15.225 11:46:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:15.225 11:46:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:15.225 11:46:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:15.225 11:46:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:15.225 11:46:45 -- nvmf/common.sh@104 -- # continue 2 00:19:15.225 11:46:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:15.225 11:46:45 -- nvmf/common.sh@104 -- # continue 2 00:19:15.225 11:46:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:15.225 11:46:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:15.225 11:46:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:15.225 11:46:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:15.225 11:46:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:15.225 11:46:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:15.225 11:46:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:15.225 11:46:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:15.225 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:15.225 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:15.225 altname enp217s0f0np0 00:19:15.225 altname ens818f0np0 00:19:15.225 inet 192.168.100.8/24 scope global mlx_0_0 00:19:15.225 valid_lft forever preferred_lft forever 00:19:15.225 11:46:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:15.225 11:46:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:15.225 11:46:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:15.225 11:46:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:15.225 11:46:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:15.225 11:46:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:15.225 11:46:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:15.225 11:46:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:15.225 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:15.225 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:15.225 altname enp217s0f1np1 00:19:15.225 altname 
ens818f1np1 00:19:15.225 inet 192.168.100.9/24 scope global mlx_0_1 00:19:15.225 valid_lft forever preferred_lft forever 00:19:15.225 11:46:45 -- nvmf/common.sh@410 -- # return 0 00:19:15.225 11:46:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:15.225 11:46:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:15.225 11:46:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:15.225 11:46:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:15.225 11:46:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:15.225 11:46:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:15.225 11:46:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:15.225 11:46:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:15.225 11:46:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:15.225 11:46:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:15.225 11:46:45 -- nvmf/common.sh@104 -- # continue 2 00:19:15.225 11:46:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:15.225 11:46:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:15.225 11:46:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:15.225 11:46:45 -- nvmf/common.sh@104 -- # continue 2 00:19:15.225 11:46:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:15.225 11:46:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:15.225 11:46:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:15.225 11:46:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:15.226 11:46:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:15.226 11:46:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:15.226 11:46:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:15.226 11:46:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:15.226 11:46:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:15.226 11:46:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:15.226 11:46:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:15.226 11:46:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:15.226 11:46:45 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:15.226 192.168.100.9' 00:19:15.226 11:46:45 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:15.226 192.168.100.9' 00:19:15.226 11:46:45 -- nvmf/common.sh@445 -- # head -n 1 00:19:15.226 11:46:45 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:15.226 11:46:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:15.226 192.168.100.9' 00:19:15.226 11:46:45 -- nvmf/common.sh@446 -- # tail -n +2 00:19:15.226 11:46:45 -- nvmf/common.sh@446 -- # head -n 1 00:19:15.226 11:46:45 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:15.226 11:46:45 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:15.226 11:46:45 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:15.226 
11:46:45 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:15.226 11:46:45 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:15.226 11:46:45 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:15.226 11:46:45 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:15.226 11:46:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:15.226 11:46:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.226 11:46:45 -- common/autotest_common.sh@10 -- # set +x 00:19:15.226 11:46:45 -- nvmf/common.sh@469 -- # nvmfpid=3765801 00:19:15.226 11:46:45 -- nvmf/common.sh@470 -- # waitforlisten 3765801 00:19:15.226 11:46:45 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.226 11:46:45 -- common/autotest_common.sh@829 -- # '[' -z 3765801 ']' 00:19:15.226 11:46:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.226 11:46:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.226 11:46:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.226 11:46:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.226 11:46:45 -- common/autotest_common.sh@10 -- # set +x 00:19:15.226 [2024-12-03 11:46:45.779326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:15.226 [2024-12-03 11:46:45.779373] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.226 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.484 [2024-12-03 11:46:45.849052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.484 [2024-12-03 11:46:45.919270] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:15.484 [2024-12-03 11:46:45.919382] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.484 [2024-12-03 11:46:45.919394] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.484 [2024-12-03 11:46:45.919403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
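The zcopy test starts its own target the same way: nvmfappstart launches nvmf_tgt on core mask 0x2 and waitforlisten blocks until the RPC socket answers. A rough equivalent of that pattern, with the binary path and flags taken from the log; the polling loop is only an illustrative stand-in for the waitforlisten() helper in autotest_common.sh:

# Start the target in the background on core mask 0x2 (flags from the log).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll the RPC socket until the target responds, then continue with the test.
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done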
00:19:15.484 [2024-12-03 11:46:45.919424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.051 11:46:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.051 11:46:46 -- common/autotest_common.sh@862 -- # return 0 00:19:16.051 11:46:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:16.051 11:46:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.051 11:46:46 -- common/autotest_common.sh@10 -- # set +x 00:19:16.051 11:46:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.051 11:46:46 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:16.051 11:46:46 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:16.051 Unsupported transport: rdma 00:19:16.051 11:46:46 -- target/zcopy.sh@17 -- # exit 0 00:19:16.051 11:46:46 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:16.051 11:46:46 -- common/autotest_common.sh@806 -- # type=--id 00:19:16.051 11:46:46 -- common/autotest_common.sh@807 -- # id=0 00:19:16.051 11:46:46 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:16.051 11:46:46 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:16.051 11:46:46 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:16.051 11:46:46 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:16.051 11:46:46 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:16.051 11:46:46 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:16.051 nvmf_trace.0 00:19:16.310 11:46:46 -- common/autotest_common.sh@821 -- # return 0 00:19:16.310 11:46:46 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:16.310 11:46:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.310 11:46:46 -- nvmf/common.sh@116 -- # sync 00:19:16.310 11:46:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:16.310 11:46:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:16.310 11:46:46 -- nvmf/common.sh@119 -- # set +e 00:19:16.310 11:46:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.310 11:46:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:16.310 rmmod nvme_rdma 00:19:16.310 rmmod nvme_fabrics 00:19:16.310 11:46:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.310 11:46:46 -- nvmf/common.sh@123 -- # set -e 00:19:16.310 11:46:46 -- nvmf/common.sh@124 -- # return 0 00:19:16.310 11:46:46 -- nvmf/common.sh@477 -- # '[' -n 3765801 ']' 00:19:16.310 11:46:46 -- nvmf/common.sh@478 -- # killprocess 3765801 00:19:16.310 11:46:46 -- common/autotest_common.sh@936 -- # '[' -z 3765801 ']' 00:19:16.310 11:46:46 -- common/autotest_common.sh@940 -- # kill -0 3765801 00:19:16.310 11:46:46 -- common/autotest_common.sh@941 -- # uname 00:19:16.310 11:46:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.310 11:46:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3765801 00:19:16.310 11:46:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:16.310 11:46:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:16.310 11:46:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3765801' 00:19:16.310 killing process with pid 3765801 00:19:16.310 11:46:46 -- common/autotest_common.sh@955 -- # kill 3765801 00:19:16.310 11:46:46 -- common/autotest_common.sh@960 -- # wait 3765801 00:19:16.570 11:46:46 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:19:16.570 11:46:46 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:16.570 00:19:16.570 real 0m8.213s 00:19:16.570 user 0m3.336s 00:19:16.570 sys 0m5.520s 00:19:16.570 11:46:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:16.570 11:46:46 -- common/autotest_common.sh@10 -- # set +x 00:19:16.570 ************************************ 00:19:16.570 END TEST nvmf_zcopy 00:19:16.570 ************************************ 00:19:16.570 11:46:47 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:16.570 11:46:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:16.570 11:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:16.570 11:46:47 -- common/autotest_common.sh@10 -- # set +x 00:19:16.570 ************************************ 00:19:16.570 START TEST nvmf_nmic 00:19:16.570 ************************************ 00:19:16.570 11:46:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:16.570 * Looking for test storage... 00:19:16.570 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:16.570 11:46:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:16.570 11:46:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:16.570 11:46:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:16.570 11:46:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:16.570 11:46:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:16.570 11:46:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:16.570 11:46:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:16.570 11:46:47 -- scripts/common.sh@335 -- # IFS=.-: 00:19:16.570 11:46:47 -- scripts/common.sh@335 -- # read -ra ver1 00:19:16.570 11:46:47 -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.570 11:46:47 -- scripts/common.sh@336 -- # read -ra ver2 00:19:16.570 11:46:47 -- scripts/common.sh@337 -- # local 'op=<' 00:19:16.570 11:46:47 -- scripts/common.sh@339 -- # ver1_l=2 00:19:16.570 11:46:47 -- scripts/common.sh@340 -- # ver2_l=1 00:19:16.570 11:46:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:16.570 11:46:47 -- scripts/common.sh@343 -- # case "$op" in 00:19:16.570 11:46:47 -- scripts/common.sh@344 -- # : 1 00:19:16.570 11:46:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:16.570 11:46:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.570 11:46:47 -- scripts/common.sh@364 -- # decimal 1 00:19:16.570 11:46:47 -- scripts/common.sh@352 -- # local d=1 00:19:16.570 11:46:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.570 11:46:47 -- scripts/common.sh@354 -- # echo 1 00:19:16.570 11:46:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:16.570 11:46:47 -- scripts/common.sh@365 -- # decimal 2 00:19:16.831 11:46:47 -- scripts/common.sh@352 -- # local d=2 00:19:16.831 11:46:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.831 11:46:47 -- scripts/common.sh@354 -- # echo 2 00:19:16.831 11:46:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:16.831 11:46:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:16.831 11:46:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:16.831 11:46:47 -- scripts/common.sh@367 -- # return 0 00:19:16.831 11:46:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.831 11:46:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.831 --rc genhtml_branch_coverage=1 00:19:16.831 --rc genhtml_function_coverage=1 00:19:16.831 --rc genhtml_legend=1 00:19:16.831 --rc geninfo_all_blocks=1 00:19:16.831 --rc geninfo_unexecuted_blocks=1 00:19:16.831 00:19:16.831 ' 00:19:16.831 11:46:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.831 --rc genhtml_branch_coverage=1 00:19:16.831 --rc genhtml_function_coverage=1 00:19:16.831 --rc genhtml_legend=1 00:19:16.831 --rc geninfo_all_blocks=1 00:19:16.831 --rc geninfo_unexecuted_blocks=1 00:19:16.831 00:19:16.831 ' 00:19:16.831 11:46:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.831 --rc genhtml_branch_coverage=1 00:19:16.831 --rc genhtml_function_coverage=1 00:19:16.831 --rc genhtml_legend=1 00:19:16.831 --rc geninfo_all_blocks=1 00:19:16.831 --rc geninfo_unexecuted_blocks=1 00:19:16.831 00:19:16.831 ' 00:19:16.831 11:46:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.831 --rc genhtml_branch_coverage=1 00:19:16.831 --rc genhtml_function_coverage=1 00:19:16.831 --rc genhtml_legend=1 00:19:16.831 --rc geninfo_all_blocks=1 00:19:16.831 --rc geninfo_unexecuted_blocks=1 00:19:16.831 00:19:16.831 ' 00:19:16.831 11:46:47 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.831 11:46:47 -- nvmf/common.sh@7 -- # uname -s 00:19:16.831 11:46:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.831 11:46:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.831 11:46:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.831 11:46:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.831 11:46:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.831 11:46:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.831 11:46:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.831 11:46:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.831 11:46:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.831 11:46:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.831 11:46:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
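The lt/cmp_versions calls traced above decide whether the installed lcov is older than 2 so the extra --rc coverage options can be enabled; the script splits each version on '.' and '-' and compares the fields numerically. A simplified, hypothetical version_lt helper illustrating that comparison (not the actual scripts/common.sh implementation):

# version_lt A B: succeed when dotted version A sorts strictly before B,
# comparing numeric fields left to right, missing fields treated as 0.
version_lt() {
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local v x y
    for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
        x=${a[v]:-0} y=${b[v]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "lcov 1.15 < 2, enable branch/function coverage opts"   # matches the trace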
00:19:16.831 11:46:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:16.831 11:46:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.831 11:46:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.831 11:46:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.831 11:46:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:16.831 11:46:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.831 11:46:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.831 11:46:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.831 11:46:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.831 11:46:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.831 11:46:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.831 11:46:47 -- paths/export.sh@5 -- # export PATH 00:19:16.831 11:46:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.831 11:46:47 -- nvmf/common.sh@46 -- # : 0 00:19:16.831 11:46:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:16.831 11:46:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:16.831 11:46:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:16.831 11:46:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.831 11:46:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.831 11:46:47 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:16.831 11:46:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:16.831 11:46:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:16.831 11:46:47 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:16.831 11:46:47 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:16.831 11:46:47 -- target/nmic.sh@14 -- # nvmftestinit 00:19:16.831 11:46:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:16.831 11:46:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.831 11:46:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:16.831 11:46:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:16.831 11:46:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:16.831 11:46:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.831 11:46:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.831 11:46:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.831 11:46:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:16.831 11:46:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:16.831 11:46:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:16.831 11:46:47 -- common/autotest_common.sh@10 -- # set +x 00:19:23.397 11:46:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.397 11:46:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:23.397 11:46:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:23.397 11:46:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:23.397 11:46:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:23.397 11:46:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:23.397 11:46:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:23.397 11:46:53 -- nvmf/common.sh@294 -- # net_devs=() 00:19:23.397 11:46:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:23.397 11:46:53 -- nvmf/common.sh@295 -- # e810=() 00:19:23.397 11:46:53 -- nvmf/common.sh@295 -- # local -ga e810 00:19:23.397 11:46:53 -- nvmf/common.sh@296 -- # x722=() 00:19:23.397 11:46:53 -- nvmf/common.sh@296 -- # local -ga x722 00:19:23.397 11:46:53 -- nvmf/common.sh@297 -- # mlx=() 00:19:23.397 11:46:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:23.397 11:46:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.397 11:46:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.397 11:46:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.397 11:46:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.397 11:46:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.397 11:46:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.397 11:46:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.397 11:46:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.397 11:46:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.398 11:46:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.398 11:46:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.398 11:46:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:23.398 11:46:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:23.398 11:46:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:23.398 11:46:53 
-- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:23.398 11:46:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:23.398 11:46:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:23.398 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:23.398 11:46:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.398 11:46:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:23.398 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:23.398 11:46:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.398 11:46:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:23.398 11:46:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.398 11:46:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.398 11:46:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.398 11:46:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:23.398 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.398 11:46:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.398 11:46:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.398 11:46:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.398 11:46:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:23.398 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.398 11:46:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:23.398 11:46:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:23.398 11:46:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:23.398 11:46:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:23.398 11:46:53 -- nvmf/common.sh@57 -- # uname 00:19:23.398 11:46:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:23.398 11:46:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:23.398 11:46:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:23.398 11:46:53 -- 
nvmf/common.sh@63 -- # modprobe ib_umad 00:19:23.398 11:46:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:23.398 11:46:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:23.398 11:46:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:23.398 11:46:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:23.398 11:46:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:23.398 11:46:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:23.398 11:46:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:23.398 11:46:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.398 11:46:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:23.398 11:46:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:23.398 11:46:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.398 11:46:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:23.398 11:46:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@104 -- # continue 2 00:19:23.398 11:46:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@104 -- # continue 2 00:19:23.398 11:46:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:23.398 11:46:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:23.398 11:46:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:23.398 11:46:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:23.398 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.398 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:23.398 altname enp217s0f0np0 00:19:23.398 altname ens818f0np0 00:19:23.398 inet 192.168.100.8/24 scope global mlx_0_0 00:19:23.398 valid_lft forever preferred_lft forever 00:19:23.398 11:46:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:23.398 11:46:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:23.398 11:46:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:23.398 11:46:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:23.398 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.398 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:19:23.398 altname enp217s0f1np1 00:19:23.398 altname ens818f1np1 00:19:23.398 inet 192.168.100.9/24 scope global mlx_0_1 00:19:23.398 valid_lft forever preferred_lft forever 00:19:23.398 11:46:53 -- nvmf/common.sh@410 -- # return 0 00:19:23.398 11:46:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:23.398 11:46:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:23.398 11:46:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:23.398 11:46:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:23.398 11:46:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.398 11:46:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:23.398 11:46:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:23.398 11:46:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.398 11:46:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:23.398 11:46:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@104 -- # continue 2 00:19:23.398 11:46:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.398 11:46:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.398 11:46:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@104 -- # continue 2 00:19:23.398 11:46:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:23.398 11:46:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:23.398 11:46:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:23.398 11:46:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:23.398 11:46:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:23.398 11:46:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:23.398 192.168.100.9' 00:19:23.398 11:46:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:23.398 192.168.100.9' 00:19:23.398 11:46:53 -- nvmf/common.sh@445 -- # head -n 1 00:19:23.398 11:46:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:23.398 11:46:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:23.398 192.168.100.9' 00:19:23.398 11:46:53 -- nvmf/common.sh@446 -- # tail -n +2 00:19:23.398 11:46:53 -- nvmf/common.sh@446 -- # head -n 1 00:19:23.398 11:46:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:23.398 11:46:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:23.399 11:46:53 -- nvmf/common.sh@451 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:23.399 11:46:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:23.399 11:46:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:23.399 11:46:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:23.399 11:46:53 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:23.399 11:46:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:23.399 11:46:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:23.399 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:19:23.399 11:46:53 -- nvmf/common.sh@469 -- # nvmfpid=3769341 00:19:23.399 11:46:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:23.399 11:46:53 -- nvmf/common.sh@470 -- # waitforlisten 3769341 00:19:23.399 11:46:53 -- common/autotest_common.sh@829 -- # '[' -z 3769341 ']' 00:19:23.399 11:46:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.399 11:46:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.399 11:46:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.399 11:46:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.399 11:46:53 -- common/autotest_common.sh@10 -- # set +x 00:19:23.399 [2024-12-03 11:46:53.934774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:23.399 [2024-12-03 11:46:53.934822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.399 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.399 [2024-12-03 11:46:54.003621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.658 [2024-12-03 11:46:54.077563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:23.658 [2024-12-03 11:46:54.077681] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.658 [2024-12-03 11:46:54.077691] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.658 [2024-12-03 11:46:54.077699] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
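nvmfappstart launches nvmf_tgt with the core mask passed by the test: the zcopy run above used -m 0x2 (one reactor, on core 1) and this nmic run uses -m 0xF, which is why four reactors come up on cores 0-3 just below. A small illustrative helper (not part of the SPDK scripts) that expands such a mask:

# mask_to_cores MASK: print the core numbers selected by a hex reactor mask.
mask_to_cores() {
    local mask=$(( $1 )) core
    local -a cores=()
    for (( core = 0; core < 64; core++ )); do
        (( mask & (1 << core) )) && cores+=("$core")
    done
    echo "${cores[*]}"
}
mask_to_cores 0x2   # -> 1         (single reactor, as in the zcopy run)
mask_to_cores 0xF   # -> 0 1 2 3   (the four reactors started for this run)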
00:19:23.658 [2024-12-03 11:46:54.077742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.658 [2024-12-03 11:46:54.077841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.658 [2024-12-03 11:46:54.077924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.658 [2024-12-03 11:46:54.077926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.225 11:46:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.226 11:46:54 -- common/autotest_common.sh@862 -- # return 0 00:19:24.226 11:46:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:24.226 11:46:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.226 11:46:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.226 11:46:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.226 11:46:54 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:24.226 11:46:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.226 11:46:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.226 [2024-12-03 11:46:54.833865] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfec090/0xff0580) succeed. 00:19:24.485 [2024-12-03 11:46:54.843220] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfed680/0x1031c20) succeed. 00:19:24.485 11:46:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.485 11:46:54 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.485 11:46:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.485 11:46:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.485 Malloc0 00:19:24.485 11:46:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.485 11:46:54 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:24.485 11:46:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.485 11:46:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.485 11:46:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.485 11:46:54 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.485 11:46:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.485 11:46:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.485 11:46:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.485 11:46:55 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:24.485 11:46:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.485 11:46:55 -- common/autotest_common.sh@10 -- # set +x 00:19:24.485 [2024-12-03 11:46:55.013982] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:24.485 11:46:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.485 11:46:55 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:24.485 test case1: single bdev can't be used in multiple subsystems 00:19:24.485 11:46:55 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:24.485 11:46:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.485 11:46:55 -- common/autotest_common.sh@10 -- # set +x 00:19:24.485 11:46:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.485 
11:46:55 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:24.485 11:46:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.485 11:46:55 -- common/autotest_common.sh@10 -- # set +x 00:19:24.485 11:46:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.485 11:46:55 -- target/nmic.sh@28 -- # nmic_status=0 00:19:24.485 11:46:55 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:24.485 11:46:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.485 11:46:55 -- common/autotest_common.sh@10 -- # set +x 00:19:24.485 [2024-12-03 11:46:55.037719] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:24.485 [2024-12-03 11:46:55.037740] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:24.485 [2024-12-03 11:46:55.037750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.485 request: 00:19:24.485 { 00:19:24.485 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:24.486 "namespace": { 00:19:24.486 "bdev_name": "Malloc0" 00:19:24.486 }, 00:19:24.486 "method": "nvmf_subsystem_add_ns", 00:19:24.486 "req_id": 1 00:19:24.486 } 00:19:24.486 Got JSON-RPC error response 00:19:24.486 response: 00:19:24.486 { 00:19:24.486 "code": -32602, 00:19:24.486 "message": "Invalid parameters" 00:19:24.486 } 00:19:24.486 11:46:55 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:24.486 11:46:55 -- target/nmic.sh@29 -- # nmic_status=1 00:19:24.486 11:46:55 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:24.486 11:46:55 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:24.486 Adding namespace failed - expected result. 
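rpc_cmd in the trace is a thin wrapper that sends each command to the running target over /var/tmp/spdk.sock via scripts/rpc.py. The target setup and the expected-failure case above correspond roughly to the following sequence (arguments copied from the trace; the RPC variable is only shorthand for this sketch):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# test case1: the same malloc bdev cannot back a namespace in a second subsystem
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo "unexpected success" \
    || echo "add_ns rejected as expected: Malloc0 already claimed by cnode1"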
00:19:24.486 11:46:55 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:24.486 test case2: host connect to nvmf target in multiple paths 00:19:24.486 11:46:55 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:24.486 11:46:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.486 11:46:55 -- common/autotest_common.sh@10 -- # set +x 00:19:24.486 [2024-12-03 11:46:55.049788] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:24.486 11:46:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.486 11:46:55 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:25.420 11:46:56 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:26.799 11:46:57 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:26.799 11:46:57 -- common/autotest_common.sh@1187 -- # local i=0 00:19:26.799 11:46:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:26.799 11:46:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:26.799 11:46:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:28.727 11:46:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:28.727 11:46:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:28.727 11:46:59 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.727 11:46:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:28.727 11:46:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.727 11:46:59 -- common/autotest_common.sh@1197 -- # return 0 00:19:28.727 11:46:59 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:28.727 [global] 00:19:28.727 thread=1 00:19:28.727 invalidate=1 00:19:28.727 rw=write 00:19:28.727 time_based=1 00:19:28.727 runtime=1 00:19:28.727 ioengine=libaio 00:19:28.727 direct=1 00:19:28.727 bs=4096 00:19:28.727 iodepth=1 00:19:28.727 norandommap=0 00:19:28.727 numjobs=1 00:19:28.727 00:19:28.727 verify_dump=1 00:19:28.727 verify_backlog=512 00:19:28.727 verify_state_save=0 00:19:28.727 do_verify=1 00:19:28.727 verify=crc32c-intel 00:19:28.727 [job0] 00:19:28.727 filename=/dev/nvme0n1 00:19:28.727 Could not set queue depth (nvme0n1) 00:19:28.988 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.988 fio-3.35 00:19:28.988 Starting 1 thread 00:19:29.917 00:19:29.918 job0: (groupid=0, jobs=1): err= 0: pid=3770518: Tue Dec 3 11:47:00 2024 00:19:29.918 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:19:29.918 slat (nsec): min=8292, max=37439, avg=8924.48, stdev=1123.13 00:19:29.918 clat (usec): min=25, max=208, avg=58.09, stdev= 5.04 00:19:29.918 lat (usec): min=57, max=217, avg=67.02, stdev= 5.14 00:19:29.918 clat percentiles (usec): 00:19:29.918 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:19:29.918 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:19:29.918 | 
70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 63], 95.00th=[ 65], 00:19:29.918 | 99.00th=[ 69], 99.50th=[ 72], 99.90th=[ 92], 99.95th=[ 157], 00:19:29.918 | 99.99th=[ 208] 00:19:29.918 write: IOPS=7169, BW=28.0MiB/s (29.4MB/s)(28.0MiB/1001msec); 0 zone resets 00:19:29.918 slat (nsec): min=8496, max=33507, avg=11465.09, stdev=1121.78 00:19:29.918 clat (usec): min=36, max=151, avg=55.67, stdev= 4.00 00:19:29.918 lat (usec): min=54, max=162, avg=67.14, stdev= 4.20 00:19:29.918 clat percentiles (usec): 00:19:29.918 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 51], 20.00th=[ 53], 00:19:29.918 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:19:29.918 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 62], 00:19:29.918 | 99.00th=[ 66], 99.50th=[ 68], 99.90th=[ 77], 99.95th=[ 90], 00:19:29.918 | 99.99th=[ 151] 00:19:29.918 bw ( KiB/s): min=29376, max=29376, per=100.00%, avg=29376.00, stdev= 0.00, samples=1 00:19:29.918 iops : min= 7344, max= 7344, avg=7344.00, stdev= 0.00, samples=1 00:19:29.918 lat (usec) : 50=2.32%, 100=97.63%, 250=0.05% 00:19:29.918 cpu : usr=10.70%, sys=18.50%, ctx=14345, majf=0, minf=1 00:19:29.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.918 issued rwts: total=7168,7177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:29.918 00:19:29.918 Run status group 0 (all jobs): 00:19:29.918 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:29.918 WRITE: bw=28.0MiB/s (29.4MB/s), 28.0MiB/s-28.0MiB/s (29.4MB/s-29.4MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:29.918 00:19:29.918 Disk stats (read/write): 00:19:29.918 nvme0n1: ios=6302/6656, merge=0/0, ticks=316/314, in_queue=630, util=90.58% 00:19:30.173 11:47:00 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:32.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:32.067 11:47:02 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:32.067 11:47:02 -- common/autotest_common.sh@1208 -- # local i=0 00:19:32.067 11:47:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:32.067 11:47:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.067 11:47:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:32.067 11:47:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.067 11:47:02 -- common/autotest_common.sh@1220 -- # return 0 00:19:32.067 11:47:02 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:32.067 11:47:02 -- target/nmic.sh@53 -- # nvmftestfini 00:19:32.067 11:47:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:32.067 11:47:02 -- nvmf/common.sh@116 -- # sync 00:19:32.067 11:47:02 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:32.067 11:47:02 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:32.067 11:47:02 -- nvmf/common.sh@119 -- # set +e 00:19:32.067 11:47:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:32.067 11:47:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:32.067 rmmod nvme_rdma 00:19:32.067 rmmod nvme_fabrics 00:19:32.067 11:47:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:32.067 11:47:02 -- nvmf/common.sh@123 -- # set -e 00:19:32.067 11:47:02 -- 
nvmf/common.sh@124 -- # return 0 00:19:32.067 11:47:02 -- nvmf/common.sh@477 -- # '[' -n 3769341 ']' 00:19:32.067 11:47:02 -- nvmf/common.sh@478 -- # killprocess 3769341 00:19:32.068 11:47:02 -- common/autotest_common.sh@936 -- # '[' -z 3769341 ']' 00:19:32.068 11:47:02 -- common/autotest_common.sh@940 -- # kill -0 3769341 00:19:32.068 11:47:02 -- common/autotest_common.sh@941 -- # uname 00:19:32.068 11:47:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:32.068 11:47:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3769341 00:19:32.068 11:47:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:32.068 11:47:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:32.068 11:47:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3769341' 00:19:32.068 killing process with pid 3769341 00:19:32.068 11:47:02 -- common/autotest_common.sh@955 -- # kill 3769341 00:19:32.068 11:47:02 -- common/autotest_common.sh@960 -- # wait 3769341 00:19:32.325 11:47:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:32.325 11:47:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:32.325 00:19:32.325 real 0m15.905s 00:19:32.325 user 0m44.823s 00:19:32.325 sys 0m6.151s 00:19:32.325 11:47:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:32.325 11:47:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.325 ************************************ 00:19:32.325 END TEST nvmf_nmic 00:19:32.325 ************************************ 00:19:32.583 11:47:02 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:32.583 11:47:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:32.583 11:47:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:32.583 11:47:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.583 ************************************ 00:19:32.583 START TEST nvmf_fio_target 00:19:32.583 ************************************ 00:19:32.583 11:47:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:32.583 * Looking for test storage... 
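Test case2 above exercises the host side: connect to cnode1 over both listeners (ports 4420 and 4421), wait until the SPDK namespace shows up as a block device, run the fio job, then disconnect both controllers by NQN. A condensed sketch of that flow, with the host NQN/ID values taken from the trace (the real waitforserial polls with a bounded retry loop):

HOST="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e"
nvme connect -i 15 $HOST -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
nvme connect -i 15 $HOST -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 2   # wait for the namespace backed by serial SPDKISFASTANDAWESOME
done
# ... run the fio-wrapper write job against /dev/nvme0n1 (job file shown above) ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # tears down both paths: "disconnected 2 controller(s)"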
00:19:32.583 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:32.583 11:47:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:32.583 11:47:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:32.583 11:47:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:32.583 11:47:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:32.583 11:47:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:32.583 11:47:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:32.583 11:47:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:32.583 11:47:03 -- scripts/common.sh@335 -- # IFS=.-: 00:19:32.583 11:47:03 -- scripts/common.sh@335 -- # read -ra ver1 00:19:32.583 11:47:03 -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.583 11:47:03 -- scripts/common.sh@336 -- # read -ra ver2 00:19:32.583 11:47:03 -- scripts/common.sh@337 -- # local 'op=<' 00:19:32.583 11:47:03 -- scripts/common.sh@339 -- # ver1_l=2 00:19:32.583 11:47:03 -- scripts/common.sh@340 -- # ver2_l=1 00:19:32.583 11:47:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:32.583 11:47:03 -- scripts/common.sh@343 -- # case "$op" in 00:19:32.583 11:47:03 -- scripts/common.sh@344 -- # : 1 00:19:32.583 11:47:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:32.583 11:47:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.583 11:47:03 -- scripts/common.sh@364 -- # decimal 1 00:19:32.583 11:47:03 -- scripts/common.sh@352 -- # local d=1 00:19:32.583 11:47:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.583 11:47:03 -- scripts/common.sh@354 -- # echo 1 00:19:32.583 11:47:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:32.583 11:47:03 -- scripts/common.sh@365 -- # decimal 2 00:19:32.583 11:47:03 -- scripts/common.sh@352 -- # local d=2 00:19:32.583 11:47:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.583 11:47:03 -- scripts/common.sh@354 -- # echo 2 00:19:32.583 11:47:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:32.583 11:47:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:32.583 11:47:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:32.583 11:47:03 -- scripts/common.sh@367 -- # return 0 00:19:32.583 11:47:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.583 11:47:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.583 --rc genhtml_branch_coverage=1 00:19:32.583 --rc genhtml_function_coverage=1 00:19:32.583 --rc genhtml_legend=1 00:19:32.583 --rc geninfo_all_blocks=1 00:19:32.583 --rc geninfo_unexecuted_blocks=1 00:19:32.583 00:19:32.583 ' 00:19:32.583 11:47:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.583 --rc genhtml_branch_coverage=1 00:19:32.583 --rc genhtml_function_coverage=1 00:19:32.583 --rc genhtml_legend=1 00:19:32.583 --rc geninfo_all_blocks=1 00:19:32.583 --rc geninfo_unexecuted_blocks=1 00:19:32.583 00:19:32.583 ' 00:19:32.583 11:47:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.583 --rc genhtml_branch_coverage=1 00:19:32.583 --rc genhtml_function_coverage=1 00:19:32.583 --rc genhtml_legend=1 00:19:32.583 --rc geninfo_all_blocks=1 00:19:32.583 --rc geninfo_unexecuted_blocks=1 00:19:32.583 00:19:32.583 ' 
00:19:32.583 11:47:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.583 --rc genhtml_branch_coverage=1 00:19:32.583 --rc genhtml_function_coverage=1 00:19:32.583 --rc genhtml_legend=1 00:19:32.583 --rc geninfo_all_blocks=1 00:19:32.583 --rc geninfo_unexecuted_blocks=1 00:19:32.583 00:19:32.583 ' 00:19:32.583 11:47:03 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.583 11:47:03 -- nvmf/common.sh@7 -- # uname -s 00:19:32.583 11:47:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.583 11:47:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.583 11:47:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.584 11:47:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.584 11:47:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.584 11:47:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.584 11:47:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.584 11:47:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.584 11:47:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.584 11:47:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.584 11:47:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:32.584 11:47:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:32.584 11:47:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.584 11:47:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.584 11:47:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.584 11:47:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:32.584 11:47:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.584 11:47:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.584 11:47:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.584 11:47:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.584 11:47:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.584 11:47:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.584 11:47:03 -- paths/export.sh@5 -- # export PATH 00:19:32.584 11:47:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.584 11:47:03 -- nvmf/common.sh@46 -- # : 0 00:19:32.584 11:47:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:32.584 11:47:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:32.584 11:47:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:32.584 11:47:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.584 11:47:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.584 11:47:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:32.584 11:47:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:32.584 11:47:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:32.584 11:47:03 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.584 11:47:03 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.584 11:47:03 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:32.584 11:47:03 -- target/fio.sh@16 -- # nvmftestinit 00:19:32.584 11:47:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:32.584 11:47:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.584 11:47:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:32.584 11:47:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:32.584 11:47:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:32.584 11:47:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.584 11:47:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.584 11:47:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.584 11:47:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:32.584 11:47:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:32.584 11:47:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:32.584 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:40.687 11:47:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:40.687 11:47:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:40.687 11:47:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:40.687 11:47:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:40.687 11:47:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:40.687 11:47:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:40.687 11:47:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:40.687 11:47:09 -- nvmf/common.sh@294 -- # net_devs=() 
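The device discovery that follows (and the identical pass earlier in the nmic run) filters the PCI bus for supported NVMe-oF NICs, here the two ConnectX ports with device id 0x1015, and maps each PCI function to its netdev name through sysfs. A condensed illustration using the addresses seen in the trace (the loop body mirrors the pci_net_devs expansion below; the surrounding pci_bus_cache lookup is omitted):

for pci in 0000:d9:00.0 0000:d9:00.1; do
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )    # e.g. .../net/mlx_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )             # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done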
00:19:40.687 11:47:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:40.687 11:47:09 -- nvmf/common.sh@295 -- # e810=() 00:19:40.687 11:47:09 -- nvmf/common.sh@295 -- # local -ga e810 00:19:40.687 11:47:09 -- nvmf/common.sh@296 -- # x722=() 00:19:40.687 11:47:09 -- nvmf/common.sh@296 -- # local -ga x722 00:19:40.687 11:47:09 -- nvmf/common.sh@297 -- # mlx=() 00:19:40.687 11:47:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:40.687 11:47:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.687 11:47:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:40.687 11:47:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:40.687 11:47:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:40.687 11:47:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:40.687 11:47:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:40.687 11:47:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:40.687 11:47:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:40.687 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:40.687 11:47:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:40.687 11:47:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:40.687 11:47:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:40.687 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:40.687 11:47:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:40.687 11:47:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:40.687 11:47:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:40.687 11:47:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:40.687 11:47:09 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.687 11:47:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:40.687 11:47:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.687 11:47:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:40.687 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:40.687 11:47:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.687 11:47:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:40.687 11:47:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.687 11:47:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:40.687 11:47:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.687 11:47:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:40.687 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:40.687 11:47:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.687 11:47:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:40.687 11:47:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:40.688 11:47:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:40.688 11:47:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:40.688 11:47:09 -- nvmf/common.sh@57 -- # uname 00:19:40.688 11:47:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:40.688 11:47:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:40.688 11:47:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:40.688 11:47:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:40.688 11:47:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:40.688 11:47:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:40.688 11:47:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:40.688 11:47:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:40.688 11:47:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:40.688 11:47:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:40.688 11:47:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:40.688 11:47:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:40.688 11:47:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:40.688 11:47:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:40.688 11:47:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:40.688 11:47:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:40.688 11:47:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:40.688 11:47:09 -- nvmf/common.sh@104 -- # continue 2 00:19:40.688 11:47:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:40.688 11:47:09 -- 
nvmf/common.sh@104 -- # continue 2 00:19:40.688 11:47:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:40.688 11:47:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:40.688 11:47:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:40.688 11:47:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:40.688 11:47:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:40.688 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:40.688 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:40.688 altname enp217s0f0np0 00:19:40.688 altname ens818f0np0 00:19:40.688 inet 192.168.100.8/24 scope global mlx_0_0 00:19:40.688 valid_lft forever preferred_lft forever 00:19:40.688 11:47:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:40.688 11:47:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:40.688 11:47:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:40.688 11:47:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:40.688 11:47:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:40.688 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:40.688 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:40.688 altname enp217s0f1np1 00:19:40.688 altname ens818f1np1 00:19:40.688 inet 192.168.100.9/24 scope global mlx_0_1 00:19:40.688 valid_lft forever preferred_lft forever 00:19:40.688 11:47:09 -- nvmf/common.sh@410 -- # return 0 00:19:40.688 11:47:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:40.688 11:47:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:40.688 11:47:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:40.688 11:47:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:40.688 11:47:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:40.688 11:47:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:40.688 11:47:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:40.688 11:47:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:40.688 11:47:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:40.688 11:47:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:40.688 11:47:09 -- nvmf/common.sh@104 -- # continue 2 00:19:40.688 11:47:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:40.688 11:47:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.688 11:47:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
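The two get_ip_address calls above reduce to a small sysfs/iproute2 pattern: find the netdev that backs a PCI function, then strip the prefix length from its IPv4 address. A minimal sketch of that pattern, with the PCI address hard-coded purely as an example:

    # Sketch only: resolve the netdev behind a PCI function and print its IPv4
    # address, mirroring the ip/awk/cut pipeline in the trace above.
    pci=0000:d9:00.0
    netdev=$(basename "$(ls -d /sys/bus/pci/devices/$pci/net/* | head -n 1)")
    ip -o -4 addr show "$netdev" | awk '{print $4}' | cut -d/ -f1

For this run that yields 192.168.100.8 for 0000:d9:00.0 (mlx_0_0) and 192.168.100.9 for 0000:d9:00.1 (mlx_0_1), matching the ip addr show output captured above.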
00:19:40.688 11:47:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:40.688 11:47:09 -- nvmf/common.sh@104 -- # continue 2 00:19:40.688 11:47:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:40.688 11:47:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:40.688 11:47:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:40.688 11:47:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:40.688 11:47:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:40.688 11:47:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:40.688 11:47:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:40.688 11:47:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:40.688 192.168.100.9' 00:19:40.688 11:47:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:40.688 192.168.100.9' 00:19:40.688 11:47:09 -- nvmf/common.sh@445 -- # head -n 1 00:19:40.688 11:47:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:40.688 11:47:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:40.688 192.168.100.9' 00:19:40.688 11:47:10 -- nvmf/common.sh@446 -- # tail -n +2 00:19:40.688 11:47:10 -- nvmf/common.sh@446 -- # head -n 1 00:19:40.688 11:47:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:40.688 11:47:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:40.688 11:47:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:40.688 11:47:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:40.688 11:47:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:40.688 11:47:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:40.688 11:47:10 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:40.688 11:47:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:40.688 11:47:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.688 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:40.688 11:47:10 -- nvmf/common.sh@469 -- # nvmfpid=3774409 00:19:40.688 11:47:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:40.688 11:47:10 -- nvmf/common.sh@470 -- # waitforlisten 3774409 00:19:40.688 11:47:10 -- common/autotest_common.sh@829 -- # '[' -z 3774409 ']' 00:19:40.688 11:47:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.688 11:47:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.688 11:47:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.688 11:47:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.688 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:40.688 [2024-12-03 11:47:10.100075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
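RDMA_IP_LIST is a newline-separated string, so the first and second target IPs are peeled off with head/tail exactly as traced above. As a stand-alone sketch (the printf construction of the list is only illustrative; the script builds it from the live interfaces):

    # Sketch only: split a newline-separated IP list into first/second targets,
    # the same head/tail steps nvmf/common.sh performs above.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)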
00:19:40.688 [2024-12-03 11:47:10.100137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.688 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.688 [2024-12-03 11:47:10.172049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.688 [2024-12-03 11:47:10.242175] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:40.688 [2024-12-03 11:47:10.242298] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.688 [2024-12-03 11:47:10.242308] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.688 [2024-12-03 11:47:10.242316] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.688 [2024-12-03 11:47:10.242365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.688 [2024-12-03 11:47:10.242475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.688 [2024-12-03 11:47:10.242546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.688 [2024-12-03 11:47:10.242548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.688 11:47:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.688 11:47:10 -- common/autotest_common.sh@862 -- # return 0 00:19:40.688 11:47:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:40.688 11:47:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:40.688 11:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:40.688 11:47:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.688 11:47:10 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:40.688 [2024-12-03 11:47:11.153113] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x90c090/0x910580) succeed. 00:19:40.688 [2024-12-03 11:47:11.162430] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x90d680/0x951c20) succeed. 
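With the rdma transport created and both IB devices registered, the target is provisioned entirely over JSON-RPC; the rpc.py calls in the trace that follows boil down to this sequence (the $rpc shorthand and shortened script path are editorial, the commands and values are the ones this run uses):

    # Sketch only: condensed from the rpc.py calls traced below.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                                   # -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then attaches with the NVME_CONNECT command assembled earlier in the trace ('nvme connect -i 15 ... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420') before the fio workloads are started.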
00:19:40.944 11:47:11 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.944 11:47:11 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:40.944 11:47:11 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.199 11:47:11 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:41.199 11:47:11 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.455 11:47:11 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:41.455 11:47:11 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.711 11:47:12 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:41.711 11:47:12 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:41.711 11:47:12 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:41.968 11:47:12 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:41.968 11:47:12 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:42.225 11:47:12 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:42.225 11:47:12 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:42.482 11:47:12 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:42.482 11:47:12 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:42.740 11:47:13 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:42.740 11:47:13 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:42.740 11:47:13 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:42.997 11:47:13 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:42.997 11:47:13 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:43.254 11:47:13 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:43.254 [2024-12-03 11:47:13.825014] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:43.254 11:47:13 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:43.511 11:47:14 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:43.769 11:47:14 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:44.700 11:47:15 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:44.700 11:47:15 -- common/autotest_common.sh@1187 -- # local 
i=0 00:19:44.700 11:47:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:44.700 11:47:15 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:19:44.700 11:47:15 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:19:44.700 11:47:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:47.218 11:47:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:47.218 11:47:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:47.218 11:47:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:47.218 11:47:17 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:19:47.218 11:47:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:47.218 11:47:17 -- common/autotest_common.sh@1197 -- # return 0 00:19:47.218 11:47:17 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:47.218 [global] 00:19:47.218 thread=1 00:19:47.218 invalidate=1 00:19:47.218 rw=write 00:19:47.218 time_based=1 00:19:47.218 runtime=1 00:19:47.218 ioengine=libaio 00:19:47.218 direct=1 00:19:47.218 bs=4096 00:19:47.218 iodepth=1 00:19:47.218 norandommap=0 00:19:47.218 numjobs=1 00:19:47.218 00:19:47.218 verify_dump=1 00:19:47.218 verify_backlog=512 00:19:47.218 verify_state_save=0 00:19:47.218 do_verify=1 00:19:47.218 verify=crc32c-intel 00:19:47.218 [job0] 00:19:47.218 filename=/dev/nvme0n1 00:19:47.218 [job1] 00:19:47.218 filename=/dev/nvme0n2 00:19:47.218 [job2] 00:19:47.218 filename=/dev/nvme0n3 00:19:47.218 [job3] 00:19:47.218 filename=/dev/nvme0n4 00:19:47.218 Could not set queue depth (nvme0n1) 00:19:47.218 Could not set queue depth (nvme0n2) 00:19:47.218 Could not set queue depth (nvme0n3) 00:19:47.218 Could not set queue depth (nvme0n4) 00:19:47.218 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.218 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.218 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.218 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.218 fio-3.35 00:19:47.218 Starting 4 threads 00:19:48.613 00:19:48.613 job0: (groupid=0, jobs=1): err= 0: pid=3775832: Tue Dec 3 11:47:18 2024 00:19:48.613 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:19:48.613 slat (nsec): min=8344, max=25342, avg=8945.16, stdev=841.83 00:19:48.613 clat (usec): min=65, max=143, avg=85.05, stdev=10.45 00:19:48.613 lat (usec): min=74, max=152, avg=94.00, stdev=10.51 00:19:48.613 clat percentiles (usec): 00:19:48.613 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 78], 00:19:48.613 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 85], 00:19:48.613 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 99], 95.00th=[ 111], 00:19:48.613 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 139], 00:19:48.613 | 99.99th=[ 145] 00:19:48.613 write: IOPS=5273, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1001msec); 0 zone resets 00:19:48.613 slat (nsec): min=10515, max=39500, avg=11530.37, stdev=1137.13 00:19:48.613 clat (usec): min=55, max=138, avg=81.57, stdev=10.13 00:19:48.613 lat (usec): min=74, max=149, avg=93.10, stdev=10.20 00:19:48.613 clat percentiles (usec): 00:19:48.613 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:19:48.613 | 30.00th=[ 77], 
40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:19:48.613 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 93], 95.00th=[ 108], 00:19:48.613 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 126], 99.95th=[ 128], 00:19:48.613 | 99.99th=[ 139] 00:19:48.613 bw ( KiB/s): min=22112, max=22112, per=29.76%, avg=22112.00, stdev= 0.00, samples=1 00:19:48.613 iops : min= 5528, max= 5528, avg=5528.00, stdev= 0.00, samples=1 00:19:48.613 lat (usec) : 100=91.75%, 250=8.25% 00:19:48.613 cpu : usr=8.70%, sys=13.40%, ctx=10399, majf=0, minf=1 00:19:48.613 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.613 issued rwts: total=5120,5279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.613 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.613 job1: (groupid=0, jobs=1): err= 0: pid=3775839: Tue Dec 3 11:47:18 2024 00:19:48.613 read: IOPS=3761, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec) 00:19:48.613 slat (nsec): min=8260, max=25439, avg=8914.18, stdev=829.03 00:19:48.613 clat (usec): min=71, max=186, avg=118.95, stdev=11.02 00:19:48.613 lat (usec): min=80, max=195, avg=127.86, stdev=11.02 00:19:48.613 clat percentiles (usec): 00:19:48.613 | 1.00th=[ 90], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 112], 00:19:48.613 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:19:48.613 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 135], 00:19:48.613 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 180], 00:19:48.613 | 99.99th=[ 188] 00:19:48.613 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:19:48.613 slat (nsec): min=10269, max=46610, avg=11390.31, stdev=1355.61 00:19:48.613 clat (usec): min=61, max=179, avg=110.54, stdev=11.03 00:19:48.613 lat (usec): min=71, max=193, avg=121.93, stdev=11.06 00:19:48.613 clat percentiles (usec): 00:19:48.613 | 1.00th=[ 82], 5.00th=[ 95], 10.00th=[ 99], 20.00th=[ 103], 00:19:48.613 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:19:48.613 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 127], 00:19:48.613 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 174], 00:19:48.613 | 99.99th=[ 180] 00:19:48.613 bw ( KiB/s): min=16384, max=16384, per=22.05%, avg=16384.00, stdev= 0.00, samples=1 00:19:48.613 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:48.613 lat (usec) : 100=8.45%, 250=91.55% 00:19:48.613 cpu : usr=6.80%, sys=9.80%, ctx=7862, majf=0, minf=1 00:19:48.613 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.613 issued rwts: total=3765,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.614 job2: (groupid=0, jobs=1): err= 0: pid=3775858: Tue Dec 3 11:47:18 2024 00:19:48.614 read: IOPS=3761, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec) 00:19:48.614 slat (nsec): min=8549, max=40346, avg=9152.95, stdev=978.12 00:19:48.614 clat (usec): min=78, max=176, avg=118.66, stdev= 9.93 00:19:48.614 lat (usec): min=87, max=186, avg=127.81, stdev= 9.92 00:19:48.614 clat percentiles (usec): 00:19:48.614 | 1.00th=[ 94], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 112], 00:19:48.614 | 30.00th=[ 115], 
40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 121], 00:19:48.614 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 130], 95.00th=[ 135], 00:19:48.614 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 172], 00:19:48.614 | 99.99th=[ 178] 00:19:48.614 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:19:48.614 slat (nsec): min=10445, max=43527, avg=11551.19, stdev=1079.63 00:19:48.614 clat (usec): min=72, max=159, avg=110.44, stdev= 9.43 00:19:48.614 lat (usec): min=83, max=190, avg=121.99, stdev= 9.47 00:19:48.614 clat percentiles (usec): 00:19:48.614 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 104], 00:19:48.614 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:19:48.614 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 126], 00:19:48.614 | 99.00th=[ 143], 99.50th=[ 147], 99.90th=[ 157], 99.95th=[ 159], 00:19:48.614 | 99.99th=[ 159] 00:19:48.614 bw ( KiB/s): min=16384, max=16384, per=22.05%, avg=16384.00, stdev= 0.00, samples=1 00:19:48.614 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:48.614 lat (usec) : 100=6.88%, 250=93.12% 00:19:48.614 cpu : usr=6.30%, sys=10.50%, ctx=7862, majf=0, minf=1 00:19:48.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.614 issued rwts: total=3765,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.614 job3: (groupid=0, jobs=1): err= 0: pid=3775864: Tue Dec 3 11:47:18 2024 00:19:48.614 read: IOPS=4903, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1001msec) 00:19:48.614 slat (nsec): min=8585, max=46557, avg=9297.40, stdev=1125.87 00:19:48.614 clat (usec): min=70, max=150, avg=88.49, stdev= 9.42 00:19:48.614 lat (usec): min=80, max=166, avg=97.78, stdev= 9.54 00:19:48.614 clat percentiles (usec): 00:19:48.614 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 82], 00:19:48.614 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 88], 00:19:48.614 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 100], 95.00th=[ 111], 00:19:48.614 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 135], 99.95th=[ 143], 00:19:48.614 | 99.99th=[ 151] 00:19:48.614 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:48.614 slat (nsec): min=10778, max=44692, avg=11997.54, stdev=1624.50 00:19:48.614 clat (usec): min=62, max=168, avg=84.39, stdev= 9.81 00:19:48.614 lat (usec): min=78, max=188, avg=96.38, stdev=10.09 00:19:48.614 clat percentiles (usec): 00:19:48.614 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 78], 00:19:48.614 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 84], 00:19:48.614 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 98], 95.00th=[ 108], 00:19:48.614 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 137], 00:19:48.614 | 99.99th=[ 169] 00:19:48.614 bw ( KiB/s): min=20720, max=20720, per=27.89%, avg=20720.00, stdev= 0.00, samples=1 00:19:48.614 iops : min= 5180, max= 5180, avg=5180.00, stdev= 0.00, samples=1 00:19:48.614 lat (usec) : 100=90.54%, 250=9.46% 00:19:48.614 cpu : usr=8.40%, sys=11.90%, ctx=10032, majf=0, minf=1 00:19:48.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:48.614 issued rwts: total=4908,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.614 00:19:48.614 Run status group 0 (all jobs): 00:19:48.614 READ: bw=68.5MiB/s (71.8MB/s), 14.7MiB/s-20.0MiB/s (15.4MB/s-20.9MB/s), io=68.6MiB (71.9MB), run=1001-1001msec 00:19:48.614 WRITE: bw=72.5MiB/s (76.1MB/s), 16.0MiB/s-20.6MiB/s (16.8MB/s-21.6MB/s), io=72.6MiB (76.1MB), run=1001-1001msec 00:19:48.614 00:19:48.614 Disk stats (read/write): 00:19:48.614 nvme0n1: ios=4321/4608, merge=0/0, ticks=327/338, in_queue=665, util=84.17% 00:19:48.614 nvme0n2: ios=3072/3476, merge=0/0, ticks=342/350, in_queue=692, util=85.29% 00:19:48.614 nvme0n3: ios=3072/3477, merge=0/0, ticks=337/370, in_queue=707, util=88.36% 00:19:48.614 nvme0n4: ios=4096/4465, merge=0/0, ticks=316/315, in_queue=631, util=89.50% 00:19:48.614 11:47:18 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:48.614 [global] 00:19:48.614 thread=1 00:19:48.614 invalidate=1 00:19:48.614 rw=randwrite 00:19:48.614 time_based=1 00:19:48.614 runtime=1 00:19:48.614 ioengine=libaio 00:19:48.614 direct=1 00:19:48.614 bs=4096 00:19:48.614 iodepth=1 00:19:48.614 norandommap=0 00:19:48.614 numjobs=1 00:19:48.614 00:19:48.614 verify_dump=1 00:19:48.614 verify_backlog=512 00:19:48.614 verify_state_save=0 00:19:48.614 do_verify=1 00:19:48.614 verify=crc32c-intel 00:19:48.614 [job0] 00:19:48.614 filename=/dev/nvme0n1 00:19:48.614 [job1] 00:19:48.614 filename=/dev/nvme0n2 00:19:48.614 [job2] 00:19:48.614 filename=/dev/nvme0n3 00:19:48.614 [job3] 00:19:48.614 filename=/dev/nvme0n4 00:19:48.614 Could not set queue depth (nvme0n1) 00:19:48.614 Could not set queue depth (nvme0n2) 00:19:48.614 Could not set queue depth (nvme0n3) 00:19:48.614 Could not set queue depth (nvme0n4) 00:19:48.878 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.878 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.878 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.878 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.878 fio-3.35 00:19:48.878 Starting 4 threads 00:19:50.327 00:19:50.327 job0: (groupid=0, jobs=1): err= 0: pid=3776268: Tue Dec 3 11:47:20 2024 00:19:50.327 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:19:50.327 slat (nsec): min=8253, max=35646, avg=8947.37, stdev=923.97 00:19:50.327 clat (usec): min=67, max=352, avg=131.09, stdev=18.99 00:19:50.327 lat (usec): min=77, max=361, avg=140.03, stdev=18.99 00:19:50.327 clat percentiles (usec): 00:19:50.327 | 1.00th=[ 82], 5.00th=[ 91], 10.00th=[ 115], 20.00th=[ 123], 00:19:50.327 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:19:50.327 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 169], 00:19:50.327 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 202], 00:19:50.327 | 99.99th=[ 355] 00:19:50.327 write: IOPS=3614, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec); 0 zone resets 00:19:50.327 slat (nsec): min=8546, max=34748, avg=10979.05, stdev=1151.88 00:19:50.327 clat (usec): min=59, max=372, avg=122.12, stdev=20.07 00:19:50.327 lat (usec): min=70, max=383, avg=133.09, stdev=20.06 00:19:50.327 clat percentiles (usec): 00:19:50.327 | 1.00th=[ 77], 
5.00th=[ 83], 10.00th=[ 102], 20.00th=[ 114], 00:19:50.327 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 125], 00:19:50.327 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 145], 95.00th=[ 159], 00:19:50.327 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 233], 99.95th=[ 359], 00:19:50.327 | 99.99th=[ 371] 00:19:50.327 bw ( KiB/s): min=16384, max=16384, per=22.79%, avg=16384.00, stdev= 0.00, samples=1 00:19:50.327 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:50.327 lat (usec) : 100=8.29%, 250=91.66%, 500=0.06% 00:19:50.327 cpu : usr=5.10%, sys=9.90%, ctx=7202, majf=0, minf=1 00:19:50.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.327 issued rwts: total=3584,3618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.327 job1: (groupid=0, jobs=1): err= 0: pid=3776274: Tue Dec 3 11:47:20 2024 00:19:50.327 read: IOPS=5343, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1001msec) 00:19:50.327 slat (nsec): min=8276, max=36549, avg=8880.02, stdev=888.23 00:19:50.327 clat (usec): min=59, max=169, avg=80.00, stdev=11.39 00:19:50.327 lat (usec): min=71, max=178, avg=88.88, stdev=11.48 00:19:50.327 clat percentiles (usec): 00:19:50.327 | 1.00th=[ 68], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74], 00:19:50.327 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 79], 00:19:50.327 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 91], 95.00th=[ 112], 00:19:50.327 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 137], 00:19:50.327 | 99.99th=[ 169] 00:19:50.327 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:19:50.327 slat (nsec): min=10121, max=38161, avg=11253.63, stdev=1090.33 00:19:50.327 clat (usec): min=53, max=158, avg=76.86, stdev=10.78 00:19:50.327 lat (usec): min=70, max=175, avg=88.11, stdev=10.86 00:19:50.327 clat percentiles (usec): 00:19:50.327 | 1.00th=[ 65], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 71], 00:19:50.327 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 75], 60.00th=[ 76], 00:19:50.327 | 70.00th=[ 78], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 105], 00:19:50.327 | 99.00th=[ 114], 99.50th=[ 118], 99.90th=[ 137], 99.95th=[ 153], 00:19:50.327 | 99.99th=[ 159] 00:19:50.327 bw ( KiB/s): min=22472, max=22472, per=31.26%, avg=22472.00, stdev= 0.00, samples=1 00:19:50.327 iops : min= 5618, max= 5618, avg=5618.00, stdev= 0.00, samples=1 00:19:50.327 lat (usec) : 100=92.49%, 250=7.51% 00:19:50.327 cpu : usr=8.40%, sys=13.90%, ctx=10981, majf=0, minf=1 00:19:50.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.327 issued rwts: total=5349,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.327 job2: (groupid=0, jobs=1): err= 0: pid=3776293: Tue Dec 3 11:47:20 2024 00:19:50.327 read: IOPS=4898, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1001msec) 00:19:50.327 slat (nsec): min=8514, max=35702, avg=9290.12, stdev=1255.15 00:19:50.327 clat (usec): min=70, max=187, avg=88.95, stdev=10.20 00:19:50.327 lat (usec): min=79, max=196, avg=98.24, stdev=10.26 00:19:50.327 clat percentiles (usec): 00:19:50.327 | 1.00th=[ 76], 5.00th=[ 
79], 10.00th=[ 80], 20.00th=[ 82], 00:19:50.327 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 89], 00:19:50.327 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 102], 95.00th=[ 113], 00:19:50.327 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 157], 99.95th=[ 176], 00:19:50.327 | 99.99th=[ 188] 00:19:50.327 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:50.327 slat (nsec): min=10392, max=38927, avg=11633.16, stdev=1392.00 00:19:50.327 clat (usec): min=67, max=132, avg=84.54, stdev= 8.92 00:19:50.327 lat (usec): min=79, max=155, avg=96.17, stdev= 9.04 00:19:50.327 clat percentiles (usec): 00:19:50.327 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 78], 00:19:50.327 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:19:50.327 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 98], 95.00th=[ 105], 00:19:50.327 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 123], 00:19:50.327 | 99.99th=[ 133] 00:19:50.327 bw ( KiB/s): min=20480, max=20480, per=28.49%, avg=20480.00, stdev= 0.00, samples=1 00:19:50.327 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:50.327 lat (usec) : 100=89.96%, 250=10.04% 00:19:50.327 cpu : usr=9.50%, sys=12.00%, ctx=10023, majf=0, minf=1 00:19:50.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.327 issued rwts: total=4903,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.327 job3: (groupid=0, jobs=1): err= 0: pid=3776299: Tue Dec 3 11:47:20 2024 00:19:50.327 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:19:50.327 slat (nsec): min=8429, max=27178, avg=9124.72, stdev=781.92 00:19:50.327 clat (usec): min=78, max=269, avg=130.87, stdev=14.93 00:19:50.327 lat (usec): min=87, max=286, avg=140.00, stdev=14.95 00:19:50.327 clat percentiles (usec): 00:19:50.327 | 1.00th=[ 89], 5.00th=[ 101], 10.00th=[ 117], 20.00th=[ 123], 00:19:50.327 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:19:50.327 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 157], 00:19:50.327 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 192], 00:19:50.327 | 99.99th=[ 269] 00:19:50.327 write: IOPS=3613, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec); 0 zone resets 00:19:50.328 slat (nsec): min=10273, max=39228, avg=11304.08, stdev=1111.26 00:19:50.328 clat (usec): min=72, max=326, avg=121.90, stdev=15.93 00:19:50.328 lat (usec): min=85, max=337, avg=133.20, stdev=15.92 00:19:50.328 clat percentiles (usec): 00:19:50.328 | 1.00th=[ 84], 5.00th=[ 92], 10.00th=[ 108], 20.00th=[ 114], 00:19:50.328 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 124], 00:19:50.328 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 151], 00:19:50.328 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 221], 99.95th=[ 289], 00:19:50.328 | 99.99th=[ 326] 00:19:50.328 bw ( KiB/s): min=16384, max=16384, per=22.79%, avg=16384.00, stdev= 0.00, samples=1 00:19:50.328 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:50.328 lat (usec) : 100=5.69%, 250=94.25%, 500=0.06% 00:19:50.328 cpu : usr=5.20%, sys=9.90%, ctx=7201, majf=0, minf=1 00:19:50.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:50.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.328 issued rwts: total=3584,3617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.328 00:19:50.328 Run status group 0 (all jobs): 00:19:50.328 READ: bw=68.0MiB/s (71.3MB/s), 14.0MiB/s-20.9MiB/s (14.7MB/s-21.9MB/s), io=68.0MiB (71.4MB), run=1001-1001msec 00:19:50.328 WRITE: bw=70.2MiB/s (73.6MB/s), 14.1MiB/s-22.0MiB/s (14.8MB/s-23.0MB/s), io=70.3MiB (73.7MB), run=1001-1001msec 00:19:50.328 00:19:50.328 Disk stats (read/write): 00:19:50.328 nvme0n1: ios=2985/3072, merge=0/0, ticks=367/343, in_queue=710, util=84.37% 00:19:50.328 nvme0n2: ios=4529/4608, merge=0/0, ticks=338/308, in_queue=646, util=85.32% 00:19:50.328 nvme0n3: ios=4096/4239, merge=0/0, ticks=332/319, in_queue=651, util=88.48% 00:19:50.328 nvme0n4: ios=2935/3072, merge=0/0, ticks=373/338, in_queue=711, util=89.52% 00:19:50.328 11:47:20 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:50.328 [global] 00:19:50.328 thread=1 00:19:50.328 invalidate=1 00:19:50.328 rw=write 00:19:50.328 time_based=1 00:19:50.328 runtime=1 00:19:50.328 ioengine=libaio 00:19:50.328 direct=1 00:19:50.328 bs=4096 00:19:50.328 iodepth=128 00:19:50.328 norandommap=0 00:19:50.328 numjobs=1 00:19:50.328 00:19:50.328 verify_dump=1 00:19:50.328 verify_backlog=512 00:19:50.328 verify_state_save=0 00:19:50.328 do_verify=1 00:19:50.328 verify=crc32c-intel 00:19:50.328 [job0] 00:19:50.328 filename=/dev/nvme0n1 00:19:50.328 [job1] 00:19:50.328 filename=/dev/nvme0n2 00:19:50.328 [job2] 00:19:50.328 filename=/dev/nvme0n3 00:19:50.328 [job3] 00:19:50.328 filename=/dev/nvme0n4 00:19:50.328 Could not set queue depth (nvme0n1) 00:19:50.328 Could not set queue depth (nvme0n2) 00:19:50.328 Could not set queue depth (nvme0n3) 00:19:50.328 Could not set queue depth (nvme0n4) 00:19:50.328 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.328 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.328 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.328 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:50.328 fio-3.35 00:19:50.328 Starting 4 threads 00:19:51.753 00:19:51.753 job0: (groupid=0, jobs=1): err= 0: pid=3776687: Tue Dec 3 11:47:22 2024 00:19:51.753 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:19:51.753 slat (usec): min=2, max=1185, avg=98.39, stdev=252.65 00:19:51.753 clat (usec): min=10267, max=16336, avg=12729.20, stdev=1331.33 00:19:51.753 lat (usec): min=10714, max=16339, avg=12827.59, stdev=1318.38 00:19:51.753 clat percentiles (usec): 00:19:51.753 | 1.00th=[10683], 5.00th=[11207], 10.00th=[11338], 20.00th=[11469], 00:19:51.753 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:19:51.753 | 70.00th=[13829], 80.00th=[14484], 90.00th=[14746], 95.00th=[15008], 00:19:51.753 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15401], 99.95th=[15533], 00:19:51.753 | 99.99th=[16319] 00:19:51.753 write: IOPS=5191, BW=20.3MiB/s (21.3MB/s)(20.3MiB/1003msec); 0 zone resets 00:19:51.753 slat (usec): min=2, max=1208, avg=92.66, stdev=238.80 00:19:51.753 clat (usec): min=1607, max=14453, avg=11842.61, stdev=1518.22 00:19:51.753 lat (usec): min=2514, 
max=14457, avg=11935.27, stdev=1507.82 00:19:51.753 clat percentiles (usec): 00:19:51.753 | 1.00th=[ 7046], 5.00th=[10421], 10.00th=[10683], 20.00th=[10814], 00:19:51.753 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11469], 00:19:51.753 | 70.00th=[11863], 80.00th=[13698], 90.00th=[13960], 95.00th=[14091], 00:19:51.753 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:19:51.753 | 99.99th=[14484] 00:19:51.753 bw ( KiB/s): min=17968, max=22992, per=21.12%, avg=20480.00, stdev=3552.50, samples=2 00:19:51.753 iops : min= 4492, max= 5748, avg=5120.00, stdev=888.13, samples=2 00:19:51.753 lat (msec) : 2=0.01%, 4=0.20%, 10=0.90%, 20=98.89% 00:19:51.753 cpu : usr=2.50%, sys=3.09%, ctx=1746, majf=0, minf=1 00:19:51.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:51.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.753 issued rwts: total=5120,5207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.753 job1: (groupid=0, jobs=1): err= 0: pid=3776697: Tue Dec 3 11:47:22 2024 00:19:51.753 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:19:51.753 slat (usec): min=2, max=1086, avg=69.96, stdev=208.17 00:19:51.753 clat (usec): min=4506, max=13100, avg=9006.09, stdev=2631.44 00:19:51.753 lat (usec): min=4513, max=13106, avg=9076.04, stdev=2646.89 00:19:51.753 clat percentiles (usec): 00:19:51.753 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6390], 00:19:51.753 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[11207], 00:19:51.753 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12125], 95.00th=[12387], 00:19:51.753 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12649], 99.95th=[12911], 00:19:51.753 | 99.99th=[13042] 00:19:51.753 write: IOPS=7257, BW=28.3MiB/s (29.7MB/s)(28.4MiB/1001msec); 0 zone resets 00:19:51.753 slat (usec): min=2, max=1568, avg=65.99, stdev=195.34 00:19:51.753 clat (usec): min=514, max=12270, avg=8539.65, stdev=2544.25 00:19:51.753 lat (usec): min=1216, max=12274, avg=8605.64, stdev=2557.69 00:19:51.753 clat percentiles (usec): 00:19:51.753 | 1.00th=[ 4146], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6128], 00:19:51.753 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6849], 60.00th=[10683], 00:19:51.753 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11469], 95.00th=[11600], 00:19:51.753 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:19:51.753 | 99.99th=[12256] 00:19:51.753 bw ( KiB/s): min=23304, max=34040, per=29.57%, avg=28672.00, stdev=7591.50, samples=2 00:19:51.753 iops : min= 5826, max= 8510, avg=7168.00, stdev=1897.87, samples=2 00:19:51.753 lat (usec) : 750=0.01% 00:19:51.753 lat (msec) : 2=0.12%, 4=0.33%, 10=51.20%, 20=48.35% 00:19:51.753 cpu : usr=3.20%, sys=4.20%, ctx=1626, majf=0, minf=1 00:19:51.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:51.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.753 issued rwts: total=7168,7265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.753 job2: (groupid=0, jobs=1): err= 0: pid=3776712: Tue Dec 3 11:47:22 2024 00:19:51.753 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:19:51.753 slat 
(usec): min=2, max=1214, avg=98.80, stdev=253.92 00:19:51.753 clat (usec): min=9885, max=16348, avg=12717.20, stdev=1339.50 00:19:51.753 lat (usec): min=9888, max=16351, avg=12816.00, stdev=1325.68 00:19:51.753 clat percentiles (usec): 00:19:51.753 | 1.00th=[10683], 5.00th=[11207], 10.00th=[11338], 20.00th=[11469], 00:19:51.753 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:19:51.753 | 70.00th=[13829], 80.00th=[14484], 90.00th=[14746], 95.00th=[15008], 00:19:51.753 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15401], 99.95th=[16319], 00:19:51.753 | 99.99th=[16319] 00:19:51.753 write: IOPS=5173, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1003msec); 0 zone resets 00:19:51.753 slat (usec): min=2, max=1501, avg=92.55, stdev=238.99 00:19:51.753 clat (usec): min=1619, max=15398, avg=11875.00, stdev=1478.51 00:19:51.753 lat (usec): min=2582, max=15402, avg=11967.56, stdev=1466.79 00:19:51.753 clat percentiles (usec): 00:19:51.753 | 1.00th=[ 7963], 5.00th=[10421], 10.00th=[10683], 20.00th=[10814], 00:19:51.753 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:19:51.753 | 70.00th=[11863], 80.00th=[13698], 90.00th=[14091], 95.00th=[14091], 00:19:51.753 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14615], 99.95th=[14746], 00:19:51.753 | 99.99th=[15401] 00:19:51.753 bw ( KiB/s): min=18088, max=22872, per=21.12%, avg=20480.00, stdev=3382.80, samples=2 00:19:51.753 iops : min= 4522, max= 5718, avg=5120.00, stdev=845.70, samples=2 00:19:51.753 lat (msec) : 2=0.01%, 4=0.11%, 10=0.69%, 20=99.19% 00:19:51.753 cpu : usr=2.50%, sys=3.29%, ctx=2049, majf=0, minf=1 00:19:51.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:51.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.753 issued rwts: total=5120,5189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.753 job3: (groupid=0, jobs=1): err= 0: pid=3776718: Tue Dec 3 11:47:22 2024 00:19:51.753 read: IOPS=6496, BW=25.4MiB/s (26.6MB/s)(25.4MiB/1002msec) 00:19:51.753 slat (usec): min=2, max=1128, avg=77.25, stdev=224.79 00:19:51.753 clat (usec): min=502, max=12643, avg=9847.30, stdev=2116.35 00:19:51.753 lat (usec): min=1413, max=12654, avg=9924.56, stdev=2121.85 00:19:51.753 clat percentiles (usec): 00:19:51.753 | 1.00th=[ 5211], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7832], 00:19:51.753 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[10945], 60.00th=[11338], 00:19:51.753 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12125], 95.00th=[12387], 00:19:51.753 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:19:51.753 | 99.99th=[12649] 00:19:51.753 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:19:51.753 slat (usec): min=2, max=1576, avg=72.16, stdev=210.71 00:19:51.753 clat (usec): min=6277, max=12259, avg=9392.11, stdev=1857.34 00:19:51.753 lat (usec): min=6280, max=12263, avg=9464.26, stdev=1862.43 00:19:51.753 clat percentiles (usec): 00:19:51.753 | 1.00th=[ 6587], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7504], 00:19:51.753 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[10421], 60.00th=[10683], 00:19:51.753 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11600], 00:19:51.753 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:19:51.753 | 99.99th=[12256] 00:19:51.753 bw ( KiB/s): min=23304, max=29944, per=27.45%, 
avg=26624.00, stdev=4695.19, samples=2 00:19:51.753 iops : min= 5826, max= 7486, avg=6656.00, stdev=1173.80, samples=2 00:19:51.753 lat (usec) : 750=0.01% 00:19:51.753 lat (msec) : 2=0.09%, 4=0.24%, 10=46.08%, 20=53.58% 00:19:51.753 cpu : usr=2.00%, sys=4.60%, ctx=1588, majf=0, minf=1 00:19:51.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:51.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.753 issued rwts: total=6509,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.753 00:19:51.753 Run status group 0 (all jobs): 00:19:51.753 READ: bw=93.1MiB/s (97.7MB/s), 19.9MiB/s-28.0MiB/s (20.9MB/s-29.3MB/s), io=93.4MiB (98.0MB), run=1001-1003msec 00:19:51.754 WRITE: bw=94.7MiB/s (99.3MB/s), 20.2MiB/s-28.3MiB/s (21.2MB/s-29.7MB/s), io=95.0MiB (99.6MB), run=1001-1003msec 00:19:51.754 00:19:51.754 Disk stats (read/write): 00:19:51.754 nvme0n1: ios=4246/4608, merge=0/0, ticks=12985/13454, in_queue=26439, util=84.35% 00:19:51.754 nvme0n2: ios=5431/5632, merge=0/0, ticks=13459/12951, in_queue=26410, util=85.29% 00:19:51.754 nvme0n3: ios=4188/4608, merge=0/0, ticks=12952/13510, in_queue=26462, util=88.45% 00:19:51.754 nvme0n4: ios=5120/5240, merge=0/0, ticks=13517/12966, in_queue=26483, util=89.50% 00:19:51.754 11:47:22 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:51.754 [global] 00:19:51.754 thread=1 00:19:51.754 invalidate=1 00:19:51.754 rw=randwrite 00:19:51.754 time_based=1 00:19:51.754 runtime=1 00:19:51.754 ioengine=libaio 00:19:51.754 direct=1 00:19:51.754 bs=4096 00:19:51.754 iodepth=128 00:19:51.754 norandommap=0 00:19:51.754 numjobs=1 00:19:51.754 00:19:51.754 verify_dump=1 00:19:51.754 verify_backlog=512 00:19:51.754 verify_state_save=0 00:19:51.754 do_verify=1 00:19:51.754 verify=crc32c-intel 00:19:51.754 [job0] 00:19:51.754 filename=/dev/nvme0n1 00:19:51.754 [job1] 00:19:51.754 filename=/dev/nvme0n2 00:19:51.754 [job2] 00:19:51.754 filename=/dev/nvme0n3 00:19:51.754 [job3] 00:19:51.754 filename=/dev/nvme0n4 00:19:51.754 Could not set queue depth (nvme0n1) 00:19:51.754 Could not set queue depth (nvme0n2) 00:19:51.754 Could not set queue depth (nvme0n3) 00:19:51.754 Could not set queue depth (nvme0n4) 00:19:52.012 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:52.012 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:52.012 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:52.012 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:52.012 fio-3.35 00:19:52.012 Starting 4 threads 00:19:53.385 00:19:53.385 job0: (groupid=0, jobs=1): err= 0: pid=3777123: Tue Dec 3 11:47:23 2024 00:19:53.385 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:19:53.385 slat (usec): min=2, max=1837, avg=85.25, stdev=282.99 00:19:53.385 clat (usec): min=6145, max=12902, avg=11114.39, stdev=1440.83 00:19:53.385 lat (usec): min=6148, max=12907, avg=11199.64, stdev=1426.78 00:19:53.385 clat percentiles (usec): 00:19:53.385 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[10028], 20.00th=[11338], 00:19:53.385 | 30.00th=[11469], 
40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:19:53.385 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11863], 00:19:53.385 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12125], 99.95th=[12780], 00:19:53.385 | 99.99th=[12911] 00:19:53.385 write: IOPS=6105, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1003msec); 0 zone resets 00:19:53.385 slat (usec): min=2, max=1685, avg=81.71, stdev=267.99 00:19:53.385 clat (usec): min=2007, max=11981, avg=10519.22, stdev=1536.62 00:19:53.385 lat (usec): min=2783, max=12843, avg=10600.94, stdev=1524.47 00:19:53.385 clat percentiles (usec): 00:19:53.385 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 8717], 20.00th=[10814], 00:19:53.385 | 30.00th=[10945], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:19:53.385 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11338], 95.00th=[11600], 00:19:53.385 | 99.00th=[11731], 99.50th=[11731], 99.90th=[11731], 99.95th=[11731], 00:19:53.385 | 99.99th=[11994] 00:19:53.385 bw ( KiB/s): min=23400, max=24576, per=25.76%, avg=23988.00, stdev=831.56, samples=2 00:19:53.385 iops : min= 5850, max= 6144, avg=5997.00, stdev=207.89, samples=2 00:19:53.385 lat (msec) : 4=0.26%, 10=10.98%, 20=88.75% 00:19:53.385 cpu : usr=3.09%, sys=4.59%, ctx=3032, majf=0, minf=1 00:19:53.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:53.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.385 issued rwts: total=5632,6124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.385 job1: (groupid=0, jobs=1): err= 0: pid=3777133: Tue Dec 3 11:47:23 2024 00:19:53.385 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:19:53.385 slat (usec): min=2, max=1475, avg=88.80, stdev=229.47 00:19:53.385 clat (usec): min=6935, max=14625, avg=11516.84, stdev=450.34 00:19:53.385 lat (usec): min=6938, max=15387, avg=11605.64, stdev=431.02 00:19:53.385 clat percentiles (usec): 00:19:53.385 | 1.00th=[10552], 5.00th=[10814], 10.00th=[10945], 20.00th=[11338], 00:19:53.385 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:19:53.385 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:19:53.385 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13960], 99.95th=[14615], 00:19:53.385 | 99.99th=[14615] 00:19:53.385 write: IOPS=5635, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1005msec); 0 zone resets 00:19:53.385 slat (usec): min=2, max=1456, avg=85.28, stdev=221.11 00:19:53.385 clat (usec): min=3252, max=13957, avg=11004.34, stdev=882.76 00:19:53.385 lat (usec): min=4655, max=14619, avg=11089.62, stdev=871.91 00:19:53.385 clat percentiles (usec): 00:19:53.385 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:19:53.385 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:19:53.385 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11469], 95.00th=[13566], 00:19:53.385 | 99.00th=[13829], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:19:53.385 | 99.99th=[13960] 00:19:53.385 bw ( KiB/s): min=21408, max=23648, per=24.20%, avg=22528.00, stdev=1583.92, samples=2 00:19:53.385 iops : min= 5352, max= 5912, avg=5632.00, stdev=395.98, samples=2 00:19:53.385 lat (msec) : 4=0.01%, 10=0.85%, 20=99.14% 00:19:53.385 cpu : usr=1.79%, sys=4.98%, ctx=1664, majf=0, minf=1 00:19:53.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:53.385 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.385 issued rwts: total=5632,5664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.385 job2: (groupid=0, jobs=1): err= 0: pid=3777147: Tue Dec 3 11:47:23 2024 00:19:53.385 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:19:53.385 slat (usec): min=2, max=1517, avg=88.81, stdev=229.91 00:19:53.385 clat (usec): min=6182, max=15360, avg=11525.87, stdev=498.14 00:19:53.385 lat (usec): min=6185, max=15363, avg=11614.68, stdev=477.72 00:19:53.385 clat percentiles (usec): 00:19:53.385 | 1.00th=[10552], 5.00th=[10814], 10.00th=[11076], 20.00th=[11338], 00:19:53.385 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:19:53.385 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:19:53.385 | 99.00th=[12518], 99.50th=[13960], 99.90th=[15401], 99.95th=[15401], 00:19:53.385 | 99.99th=[15401] 00:19:53.385 write: IOPS=5621, BW=22.0MiB/s (23.0MB/s)(22.1MiB/1005msec); 0 zone resets 00:19:53.385 slat (usec): min=2, max=1470, avg=85.41, stdev=220.55 00:19:53.385 clat (usec): min=3924, max=14352, avg=11021.03, stdev=846.72 00:19:53.385 lat (usec): min=4641, max=14548, avg=11106.44, stdev=836.14 00:19:53.385 clat percentiles (usec): 00:19:53.385 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:19:53.385 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:19:53.385 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11469], 95.00th=[13566], 00:19:53.385 | 99.00th=[13829], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:19:53.385 | 99.99th=[14353] 00:19:53.385 bw ( KiB/s): min=21344, max=23712, per=24.20%, avg=22528.00, stdev=1674.43, samples=2 00:19:53.385 iops : min= 5336, max= 5928, avg=5632.00, stdev=418.61, samples=2 00:19:53.385 lat (msec) : 4=0.01%, 10=0.85%, 20=99.14% 00:19:53.385 cpu : usr=1.89%, sys=4.98%, ctx=1659, majf=0, minf=1 00:19:53.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:53.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.385 issued rwts: total=5632,5650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.385 job3: (groupid=0, jobs=1): err= 0: pid=3777154: Tue Dec 3 11:47:23 2024 00:19:53.385 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:19:53.385 slat (usec): min=2, max=2428, avg=86.53, stdev=282.81 00:19:53.385 clat (usec): min=6064, max=12358, avg=11267.30, stdev=911.67 00:19:53.385 lat (usec): min=6964, max=12362, avg=11353.82, stdev=876.88 00:19:53.385 clat percentiles (usec): 00:19:53.385 | 1.00th=[ 7963], 5.00th=[ 8291], 10.00th=[10552], 20.00th=[11076], 00:19:53.385 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:19:53.385 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11863], 00:19:53.385 | 99.00th=[11994], 99.50th=[11994], 99.90th=[11994], 99.95th=[11994], 00:19:53.385 | 99.99th=[12387] 00:19:53.385 write: IOPS=5931, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1004msec); 0 zone resets 00:19:53.385 slat (usec): min=2, max=2232, avg=82.62, stdev=268.09 00:19:53.385 clat (usec): min=2007, max=11979, avg=10682.87, stdev=1265.49 00:19:53.385 lat (usec): min=3176, max=12843, avg=10765.49, stdev=1245.55 00:19:53.385 clat 
percentiles (usec): 00:19:53.385 | 1.00th=[ 5604], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[10683], 00:19:53.385 | 30.00th=[10945], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:19:53.385 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:19:53.385 | 99.00th=[11731], 99.50th=[11863], 99.90th=[11994], 99.95th=[11994], 00:19:53.385 | 99.99th=[11994] 00:19:53.385 bw ( KiB/s): min=22048, max=24576, per=25.04%, avg=23312.00, stdev=1787.57, samples=2 00:19:53.385 iops : min= 5512, max= 6144, avg=5828.00, stdev=446.89, samples=2 00:19:53.385 lat (msec) : 4=0.27%, 10=9.60%, 20=90.14% 00:19:53.385 cpu : usr=2.79%, sys=5.08%, ctx=2386, majf=0, minf=1 00:19:53.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:53.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.385 issued rwts: total=5632,5955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.385 00:19:53.385 Run status group 0 (all jobs): 00:19:53.385 READ: bw=87.6MiB/s (91.8MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=88.0MiB (92.3MB), run=1003-1005msec 00:19:53.385 WRITE: bw=90.9MiB/s (95.3MB/s), 22.0MiB/s-23.8MiB/s (23.0MB/s-25.0MB/s), io=91.4MiB (95.8MB), run=1003-1005msec 00:19:53.385 00:19:53.385 Disk stats (read/write): 00:19:53.385 nvme0n1: ios=4767/5120, merge=0/0, ticks=12869/13339, in_queue=26208, util=84.07% 00:19:53.386 nvme0n2: ios=4608/4719, merge=0/0, ticks=26175/25661, in_queue=51836, util=85.10% 00:19:53.386 nvme0n3: ios=4608/4723, merge=0/0, ticks=26270/25679, in_queue=51949, util=88.35% 00:19:53.386 nvme0n4: ios=4608/5059, merge=0/0, ticks=13376/14113, in_queue=27489, util=89.39% 00:19:53.386 11:47:23 -- target/fio.sh@55 -- # sync 00:19:53.386 11:47:23 -- target/fio.sh@59 -- # fio_pid=3777380 00:19:53.386 11:47:23 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:53.386 11:47:23 -- target/fio.sh@61 -- # sleep 3 00:19:53.386 [global] 00:19:53.386 thread=1 00:19:53.386 invalidate=1 00:19:53.386 rw=read 00:19:53.386 time_based=1 00:19:53.386 runtime=10 00:19:53.386 ioengine=libaio 00:19:53.386 direct=1 00:19:53.386 bs=4096 00:19:53.386 iodepth=1 00:19:53.386 norandommap=1 00:19:53.386 numjobs=1 00:19:53.386 00:19:53.386 [job0] 00:19:53.386 filename=/dev/nvme0n1 00:19:53.386 [job1] 00:19:53.386 filename=/dev/nvme0n2 00:19:53.386 [job2] 00:19:53.386 filename=/dev/nvme0n3 00:19:53.386 [job3] 00:19:53.386 filename=/dev/nvme0n4 00:19:53.386 Could not set queue depth (nvme0n1) 00:19:53.386 Could not set queue depth (nvme0n2) 00:19:53.386 Could not set queue depth (nvme0n3) 00:19:53.386 Could not set queue depth (nvme0n4) 00:19:53.644 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.644 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.644 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.644 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.644 fio-3.35 00:19:53.644 Starting 4 threads 00:19:56.173 11:47:26 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:56.431 fio: io_u error on file 
/dev/nvme0n4: Operation not supported: read offset=76111872, buflen=4096 00:19:56.431 fio: pid=3777590, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:56.431 11:47:26 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:56.689 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=81489920, buflen=4096 00:19:56.689 fio: pid=3777582, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:56.689 11:47:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.689 11:47:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:56.689 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45056000, buflen=4096 00:19:56.689 fio: pid=3777552, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:56.947 11:47:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.947 11:47:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:56.947 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=35315712, buflen=4096 00:19:56.947 fio: pid=3777561, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:56.947 00:19:56.947 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3777552: Tue Dec 3 11:47:27 2024 00:19:56.947 read: IOPS=9211, BW=36.0MiB/s (37.7MB/s)(107MiB/2973msec) 00:19:56.947 slat (usec): min=6, max=19878, avg=10.95, stdev=169.60 00:19:56.947 clat (usec): min=48, max=193, avg=95.89, stdev=22.03 00:19:56.947 lat (usec): min=57, max=19960, avg=106.84, stdev=170.91 00:19:56.947 clat percentiles (usec): 00:19:56.947 | 1.00th=[ 56], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 74], 00:19:56.947 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 100], 60.00th=[ 111], 00:19:56.947 | 70.00th=[ 115], 80.00th=[ 119], 90.00th=[ 123], 95.00th=[ 126], 00:19:56.947 | 99.00th=[ 133], 99.50th=[ 139], 99.90th=[ 161], 99.95th=[ 165], 00:19:56.947 | 99.99th=[ 174] 00:19:56.947 bw ( KiB/s): min=30808, max=43832, per=32.14%, avg=36520.00, stdev=6420.51, samples=5 00:19:56.948 iops : min= 7702, max=10958, avg=9130.00, stdev=1605.13, samples=5 00:19:56.948 lat (usec) : 50=0.03%, 100=49.97%, 250=50.00% 00:19:56.948 cpu : usr=4.21%, sys=13.16%, ctx=27392, majf=0, minf=1 00:19:56.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.948 issued rwts: total=27385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.948 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3777561: Tue Dec 3 11:47:27 2024 00:19:56.948 read: IOPS=7817, BW=30.5MiB/s (32.0MB/s)(97.7MiB/3199msec) 00:19:56.948 slat (usec): min=7, max=15953, avg=12.95, stdev=230.84 00:19:56.948 clat (usec): min=44, max=21780, avg=112.69, stdev=150.23 00:19:56.948 lat (usec): min=54, max=21789, avg=125.64, stdev=275.15 00:19:56.948 clat percentiles (usec): 00:19:56.948 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 81], 00:19:56.948 | 30.00th=[ 104], 
40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 119], 00:19:56.948 | 70.00th=[ 122], 80.00th=[ 129], 90.00th=[ 151], 95.00th=[ 159], 00:19:56.948 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 206], 99.95th=[ 212], 00:19:56.948 | 99.99th=[ 494] 00:19:56.948 bw ( KiB/s): min=26280, max=34894, per=26.84%, avg=30491.67, stdev=3501.21, samples=6 00:19:56.948 iops : min= 6570, max= 8723, avg=7622.83, stdev=875.18, samples=6 00:19:56.948 lat (usec) : 50=0.05%, 100=26.12%, 250=73.81%, 500=0.01% 00:19:56.948 lat (msec) : 10=0.01%, 50=0.01% 00:19:56.948 cpu : usr=3.41%, sys=11.10%, ctx=25014, majf=0, minf=2 00:19:56.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.948 issued rwts: total=25007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.948 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3777582: Tue Dec 3 11:47:27 2024 00:19:56.948 read: IOPS=7103, BW=27.7MiB/s (29.1MB/s)(77.7MiB/2801msec) 00:19:56.948 slat (usec): min=7, max=15908, avg=11.05, stdev=140.44 00:19:56.948 clat (usec): min=64, max=21739, avg=127.28, stdev=154.70 00:19:56.948 lat (usec): min=72, max=21749, avg=138.34, stdev=208.74 00:19:56.948 clat percentiles (usec): 00:19:56.948 | 1.00th=[ 77], 5.00th=[ 86], 10.00th=[ 110], 20.00th=[ 116], 00:19:56.948 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:19:56.948 | 70.00th=[ 131], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 163], 00:19:56.948 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 204], 99.95th=[ 212], 00:19:56.948 | 99.99th=[ 258] 00:19:56.948 bw ( KiB/s): min=25784, max=30808, per=25.09%, avg=28508.80, stdev=2466.83, samples=5 00:19:56.948 iops : min= 6446, max= 7702, avg=7127.20, stdev=616.71, samples=5 00:19:56.948 lat (usec) : 100=8.26%, 250=91.73%, 500=0.01% 00:19:56.948 lat (msec) : 50=0.01% 00:19:56.948 cpu : usr=3.89%, sys=9.71%, ctx=19898, majf=0, minf=2 00:19:56.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.948 issued rwts: total=19896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.948 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3777590: Tue Dec 3 11:47:27 2024 00:19:56.948 read: IOPS=7100, BW=27.7MiB/s (29.1MB/s)(72.6MiB/2617msec) 00:19:56.948 slat (nsec): min=8480, max=36481, avg=10023.37, stdev=2490.01 00:19:56.948 clat (usec): min=70, max=237, avg=128.16, stdev=18.18 00:19:56.948 lat (usec): min=79, max=246, avg=138.19, stdev=18.34 00:19:56.948 clat percentiles (usec): 00:19:56.948 | 1.00th=[ 87], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 117], 00:19:56.948 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:19:56.948 | 70.00th=[ 131], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 161], 00:19:56.948 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 204], 99.95th=[ 208], 00:19:56.948 | 99.99th=[ 215] 00:19:56.948 bw ( KiB/s): min=26224, max=30800, per=25.22%, avg=28657.60, stdev=2220.84, samples=5 00:19:56.948 iops : min= 6556, max= 7700, avg=7164.40, stdev=555.21, samples=5 
00:19:56.948 lat (usec) : 100=2.78%, 250=97.22% 00:19:56.948 cpu : usr=3.33%, sys=10.13%, ctx=18583, majf=0, minf=2 00:19:56.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.948 issued rwts: total=18583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.948 00:19:56.948 Run status group 0 (all jobs): 00:19:56.948 READ: bw=111MiB/s (116MB/s), 27.7MiB/s-36.0MiB/s (29.1MB/s-37.7MB/s), io=355MiB (372MB), run=2617-3199msec 00:19:56.948 00:19:56.948 Disk stats (read/write): 00:19:56.948 nvme0n1: ios=25676/0, merge=0/0, ticks=2294/0, in_queue=2294, util=92.75% 00:19:56.948 nvme0n2: ios=23400/0, merge=0/0, ticks=2511/0, in_queue=2511, util=92.30% 00:19:56.948 nvme0n3: ios=18276/0, merge=0/0, ticks=2195/0, in_queue=2195, util=95.89% 00:19:56.948 nvme0n4: ios=18370/0, merge=0/0, ticks=2235/0, in_queue=2235, util=96.43% 00:19:56.948 11:47:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.948 11:47:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:57.206 11:47:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:57.206 11:47:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:57.464 11:47:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:57.464 11:47:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:57.722 11:47:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:57.722 11:47:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:57.981 11:47:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:57.981 11:47:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:57.981 11:47:28 -- target/fio.sh@69 -- # fio_status=0 00:19:57.981 11:47:28 -- target/fio.sh@70 -- # wait 3777380 00:19:57.981 11:47:28 -- target/fio.sh@70 -- # fio_status=4 00:19:57.981 11:47:28 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:58.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:58.914 11:47:29 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:58.914 11:47:29 -- common/autotest_common.sh@1208 -- # local i=0 00:19:58.914 11:47:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:58.914 11:47:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:58.914 11:47:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:58.914 11:47:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:58.914 11:47:29 -- common/autotest_common.sh@1220 -- # return 0 00:19:58.914 11:47:29 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:58.914 11:47:29 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:58.914 nvmf hotplug test: fio failed as expected 00:19:58.914 11:47:29 -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.173 11:47:29 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:59.173 11:47:29 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:59.173 11:47:29 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:59.173 11:47:29 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:59.173 11:47:29 -- target/fio.sh@91 -- # nvmftestfini 00:19:59.173 11:47:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:59.173 11:47:29 -- nvmf/common.sh@116 -- # sync 00:19:59.173 11:47:29 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:59.173 11:47:29 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:59.173 11:47:29 -- nvmf/common.sh@119 -- # set +e 00:19:59.173 11:47:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:59.173 11:47:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:59.173 rmmod nvme_rdma 00:19:59.173 rmmod nvme_fabrics 00:19:59.173 11:47:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:59.173 11:47:29 -- nvmf/common.sh@123 -- # set -e 00:19:59.173 11:47:29 -- nvmf/common.sh@124 -- # return 0 00:19:59.173 11:47:29 -- nvmf/common.sh@477 -- # '[' -n 3774409 ']' 00:19:59.173 11:47:29 -- nvmf/common.sh@478 -- # killprocess 3774409 00:19:59.173 11:47:29 -- common/autotest_common.sh@936 -- # '[' -z 3774409 ']' 00:19:59.173 11:47:29 -- common/autotest_common.sh@940 -- # kill -0 3774409 00:19:59.173 11:47:29 -- common/autotest_common.sh@941 -- # uname 00:19:59.173 11:47:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.173 11:47:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3774409 00:19:59.431 11:47:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.431 11:47:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.431 11:47:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3774409' 00:19:59.431 killing process with pid 3774409 00:19:59.431 11:47:29 -- common/autotest_common.sh@955 -- # kill 3774409 00:19:59.431 11:47:29 -- common/autotest_common.sh@960 -- # wait 3774409 00:19:59.690 11:47:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:59.690 11:47:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:59.690 00:19:59.690 real 0m27.121s 00:19:59.690 user 2m8.408s 00:19:59.690 sys 0m10.368s 00:19:59.690 11:47:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:59.690 11:47:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.690 ************************************ 00:19:59.690 END TEST nvmf_fio_target 00:19:59.690 ************************************ 00:19:59.690 11:47:30 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:59.690 11:47:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.690 11:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.690 11:47:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.690 ************************************ 00:19:59.690 START TEST nvmf_bdevio 00:19:59.690 ************************************ 00:19:59.690 11:47:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:59.690 * Looking for test storage... 
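For reference, the hotplug pass traced above reduces to the following flow: fio is started in the background against the exported namespaces, the backing bdevs are deleted over RPC while it is still reading, the resulting err=95 (Operation not supported) is treated as the expected outcome, and the initiator then disconnects before the target is torn down. A condensed sketch (paths shortened, names and values taken from the trace; not the verbatim fio.sh):

    # start the read workload in the background (fio.sh@58/59)
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # hot-remove the backing bdevs while fio is still running (fio.sh@63-66)
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done

    # fio is expected to fail once its devices disappear (fio.sh@70)
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'

    # initiator disconnect and target-side cleanup (fio.sh@72, @83, nvmftestfini)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics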
00:19:59.690 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:59.690 11:47:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:59.690 11:47:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:59.690 11:47:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:59.949 11:47:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:59.949 11:47:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:59.949 11:47:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:59.949 11:47:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:59.949 11:47:30 -- scripts/common.sh@335 -- # IFS=.-: 00:19:59.949 11:47:30 -- scripts/common.sh@335 -- # read -ra ver1 00:19:59.949 11:47:30 -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.949 11:47:30 -- scripts/common.sh@336 -- # read -ra ver2 00:19:59.949 11:47:30 -- scripts/common.sh@337 -- # local 'op=<' 00:19:59.949 11:47:30 -- scripts/common.sh@339 -- # ver1_l=2 00:19:59.949 11:47:30 -- scripts/common.sh@340 -- # ver2_l=1 00:19:59.949 11:47:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:59.949 11:47:30 -- scripts/common.sh@343 -- # case "$op" in 00:19:59.949 11:47:30 -- scripts/common.sh@344 -- # : 1 00:19:59.949 11:47:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:59.949 11:47:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:59.949 11:47:30 -- scripts/common.sh@364 -- # decimal 1 00:19:59.949 11:47:30 -- scripts/common.sh@352 -- # local d=1 00:19:59.949 11:47:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.949 11:47:30 -- scripts/common.sh@354 -- # echo 1 00:19:59.949 11:47:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:59.949 11:47:30 -- scripts/common.sh@365 -- # decimal 2 00:19:59.949 11:47:30 -- scripts/common.sh@352 -- # local d=2 00:19:59.949 11:47:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.949 11:47:30 -- scripts/common.sh@354 -- # echo 2 00:19:59.949 11:47:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:59.949 11:47:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:59.949 11:47:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:59.949 11:47:30 -- scripts/common.sh@367 -- # return 0 00:19:59.949 11:47:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.949 11:47:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:59.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.949 --rc genhtml_branch_coverage=1 00:19:59.949 --rc genhtml_function_coverage=1 00:19:59.949 --rc genhtml_legend=1 00:19:59.949 --rc geninfo_all_blocks=1 00:19:59.949 --rc geninfo_unexecuted_blocks=1 00:19:59.949 00:19:59.950 ' 00:19:59.950 11:47:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:59.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.950 --rc genhtml_branch_coverage=1 00:19:59.950 --rc genhtml_function_coverage=1 00:19:59.950 --rc genhtml_legend=1 00:19:59.950 --rc geninfo_all_blocks=1 00:19:59.950 --rc geninfo_unexecuted_blocks=1 00:19:59.950 00:19:59.950 ' 00:19:59.950 11:47:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:59.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.950 --rc genhtml_branch_coverage=1 00:19:59.950 --rc genhtml_function_coverage=1 00:19:59.950 --rc genhtml_legend=1 00:19:59.950 --rc geninfo_all_blocks=1 00:19:59.950 --rc geninfo_unexecuted_blocks=1 00:19:59.950 00:19:59.950 ' 
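The lcov version gate traced above (lt 1.15 2 via cmp_versions) is a component-wise numeric compare: both version strings are split on '.', '-' and ':', and the fields are compared left to right until one differs. A standalone sketch of that logic, for readers following the trace (illustrative only, not the verbatim scripts/common.sh):

    cmp_lt() {                       # succeeds when version $1 sorts before version $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                     # equal is not less-than
    }

    cmp_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # same outcome the trace reaches: 1 < 2 on the first field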
00:19:59.950 11:47:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:59.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.950 --rc genhtml_branch_coverage=1 00:19:59.950 --rc genhtml_function_coverage=1 00:19:59.950 --rc genhtml_legend=1 00:19:59.950 --rc geninfo_all_blocks=1 00:19:59.950 --rc geninfo_unexecuted_blocks=1 00:19:59.950 00:19:59.950 ' 00:19:59.950 11:47:30 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.950 11:47:30 -- nvmf/common.sh@7 -- # uname -s 00:19:59.950 11:47:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.950 11:47:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.950 11:47:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.950 11:47:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.950 11:47:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.950 11:47:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.950 11:47:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.950 11:47:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.950 11:47:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.950 11:47:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.950 11:47:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:59.950 11:47:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:59.950 11:47:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.950 11:47:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.950 11:47:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.950 11:47:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:59.950 11:47:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.950 11:47:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.950 11:47:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.950 11:47:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.950 11:47:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.950 11:47:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.950 11:47:30 -- paths/export.sh@5 -- # export PATH 00:19:59.950 11:47:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.950 11:47:30 -- nvmf/common.sh@46 -- # : 0 00:19:59.950 11:47:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:59.950 11:47:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:59.950 11:47:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:59.950 11:47:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.950 11:47:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.950 11:47:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:59.950 11:47:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:59.950 11:47:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:59.950 11:47:30 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.950 11:47:30 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.950 11:47:30 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:59.950 11:47:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:59.950 11:47:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.950 11:47:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:59.950 11:47:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:59.950 11:47:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:59.950 11:47:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.950 11:47:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.950 11:47:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.950 11:47:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:59.950 11:47:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:59.950 11:47:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:59.950 11:47:30 -- common/autotest_common.sh@10 -- # set +x 00:20:06.515 11:47:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:06.515 11:47:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:06.515 11:47:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:06.515 11:47:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:06.515 11:47:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:06.515 11:47:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:06.515 11:47:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:06.515 11:47:36 -- nvmf/common.sh@294 -- # net_devs=() 00:20:06.515 11:47:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:06.515 11:47:36 -- nvmf/common.sh@295 
-- # e810=() 00:20:06.515 11:47:36 -- nvmf/common.sh@295 -- # local -ga e810 00:20:06.515 11:47:36 -- nvmf/common.sh@296 -- # x722=() 00:20:06.515 11:47:36 -- nvmf/common.sh@296 -- # local -ga x722 00:20:06.515 11:47:36 -- nvmf/common.sh@297 -- # mlx=() 00:20:06.515 11:47:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:06.515 11:47:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.515 11:47:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:06.515 11:47:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:06.515 11:47:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:06.515 11:47:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:06.515 11:47:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:06.515 11:47:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:06.515 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:06.515 11:47:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:06.515 11:47:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:06.515 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:06.515 11:47:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:06.515 11:47:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:06.515 11:47:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.515 11:47:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
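The discovery logic above keys off the Mellanox vendor ID 0x15b3 and device ID 0x1015 matched in the pci_bus_cache, and, as the trace continues below, resolves each PCI function to its kernel netdev through sysfs. Done by hand, the same lookup looks roughly like this (a sketch; the 0000:d9:00.x addresses and mlx_0_* names are the ones reported in this run):

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # vendor/device IDs as sysfs reports them, e.g. 0x15b3 / 0x1015 here
        vendor=$(cat /sys/bus/pci/devices/$pci/vendor)
        device=$(cat /sys/bus/pci/devices/$pci/device)
        # netdev name(s) registered for this function, e.g. mlx_0_0 / mlx_0_1 on this rig
        net_devs=(/sys/bus/pci/devices/$pci/net/*)
        echo "Found net devices under $pci ($vendor - $device): ${net_devs[*]##*/}"
    done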
00:20:06.515 11:47:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.515 11:47:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:06.515 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:06.515 11:47:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.515 11:47:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.515 11:47:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:06.515 11:47:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.515 11:47:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:06.515 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:06.515 11:47:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.515 11:47:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:06.515 11:47:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:06.515 11:47:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:06.515 11:47:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:06.515 11:47:36 -- nvmf/common.sh@57 -- # uname 00:20:06.515 11:47:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:06.515 11:47:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:06.515 11:47:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:06.515 11:47:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:06.515 11:47:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:06.515 11:47:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:06.515 11:47:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:06.515 11:47:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:06.515 11:47:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:06.515 11:47:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:06.515 11:47:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:06.515 11:47:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:06.515 11:47:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:06.515 11:47:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:06.515 11:47:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:06.515 11:47:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:06.515 11:47:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:06.515 11:47:36 -- nvmf/common.sh@104 -- # continue 2 00:20:06.515 11:47:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.515 11:47:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:06.515 11:47:36 -- nvmf/common.sh@104 -- # continue 2 00:20:06.515 11:47:36 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:06.515 11:47:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:06.515 11:47:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:06.515 11:47:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:06.515 11:47:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:06.515 11:47:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:06.515 11:47:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:06.515 11:47:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:06.515 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:06.515 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:06.515 altname enp217s0f0np0 00:20:06.515 altname ens818f0np0 00:20:06.515 inet 192.168.100.8/24 scope global mlx_0_0 00:20:06.515 valid_lft forever preferred_lft forever 00:20:06.515 11:47:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:06.515 11:47:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:06.515 11:47:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:06.515 11:47:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:06.515 11:47:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:06.515 11:47:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:06.515 11:47:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:06.515 11:47:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:06.515 11:47:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:06.515 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:06.516 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:06.516 altname enp217s0f1np1 00:20:06.516 altname ens818f1np1 00:20:06.516 inet 192.168.100.9/24 scope global mlx_0_1 00:20:06.516 valid_lft forever preferred_lft forever 00:20:06.516 11:47:36 -- nvmf/common.sh@410 -- # return 0 00:20:06.516 11:47:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:06.516 11:47:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:06.516 11:47:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:06.516 11:47:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:06.516 11:47:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:06.516 11:47:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:06.516 11:47:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:06.516 11:47:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:06.516 11:47:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:06.516 11:47:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:06.516 11:47:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:06.516 11:47:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.516 11:47:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:06.516 11:47:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:06.516 11:47:36 -- nvmf/common.sh@104 -- # continue 2 00:20:06.516 11:47:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:06.516 11:47:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.516 11:47:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:06.516 11:47:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.516 11:47:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:06.516 11:47:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:06.516 11:47:36 -- 
nvmf/common.sh@104 -- # continue 2 00:20:06.516 11:47:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:06.516 11:47:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:06.516 11:47:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:06.516 11:47:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:06.516 11:47:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:06.516 11:47:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:06.516 11:47:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:06.516 11:47:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:06.516 11:47:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:06.516 11:47:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:06.516 11:47:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:06.516 11:47:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:06.516 11:47:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:06.516 192.168.100.9' 00:20:06.516 11:47:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:06.516 192.168.100.9' 00:20:06.516 11:47:36 -- nvmf/common.sh@445 -- # head -n 1 00:20:06.516 11:47:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:06.516 11:47:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:06.516 192.168.100.9' 00:20:06.516 11:47:36 -- nvmf/common.sh@446 -- # tail -n +2 00:20:06.516 11:47:36 -- nvmf/common.sh@446 -- # head -n 1 00:20:06.516 11:47:36 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:06.516 11:47:36 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:06.516 11:47:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:06.516 11:47:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:06.516 11:47:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:06.516 11:47:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:06.516 11:47:36 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:06.516 11:47:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:06.516 11:47:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.516 11:47:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.516 11:47:36 -- nvmf/common.sh@469 -- # nvmfpid=3781830 00:20:06.516 11:47:36 -- nvmf/common.sh@470 -- # waitforlisten 3781830 00:20:06.516 11:47:36 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:06.516 11:47:36 -- common/autotest_common.sh@829 -- # '[' -z 3781830 ']' 00:20:06.516 11:47:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.516 11:47:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.516 11:47:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.516 11:47:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.516 11:47:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.516 [2024-12-03 11:47:36.945952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
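The addresses used for the rest of the run come straight from the two RDMA ports enumerated above: the first IPv4 address on each mlx_0_* interface is collected into RDMA_IP_LIST, and its first and second entries become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. Condensed from the trace (same commands, reordered for readability):

    get_ip_address() {
        # first IPv4 address on the interface, without the /24 prefix length
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 in this run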
00:20:06.516 [2024-12-03 11:47:36.946010] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.516 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.516 [2024-12-03 11:47:37.019195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.516 [2024-12-03 11:47:37.091251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:06.516 [2024-12-03 11:47:37.091365] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.516 [2024-12-03 11:47:37.091375] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.516 [2024-12-03 11:47:37.091384] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.516 [2024-12-03 11:47:37.091502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:06.516 [2024-12-03 11:47:37.091612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:06.516 [2024-12-03 11:47:37.091721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.516 [2024-12-03 11:47:37.091723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:07.452 11:47:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.452 11:47:37 -- common/autotest_common.sh@862 -- # return 0 00:20:07.452 11:47:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:07.452 11:47:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:07.452 11:47:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.452 11:47:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.452 11:47:37 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:07.452 11:47:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.452 11:47:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.452 [2024-12-03 11:47:37.843170] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd7e970/0xd82e60) succeed. 00:20:07.452 [2024-12-03 11:47:37.852298] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd7ff60/0xdc4500) succeed. 
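With the RDMA transport created above and both IB devices registered, the trace below provisions the actual test subsystem: a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and an RDMA listener on 192.168.100.8:4420. Issued by hand against a running nvmf_tgt, the equivalent RPC sequence would look roughly like this (flags and values copied from the rpc_cmd calls in the trace; paths shortened):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420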
00:20:07.452 11:47:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.452 11:47:37 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:07.452 11:47:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.452 11:47:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.452 Malloc0 00:20:07.452 11:47:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.452 11:47:37 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.452 11:47:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.452 11:47:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.452 11:47:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.452 11:47:38 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.452 11:47:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.452 11:47:38 -- common/autotest_common.sh@10 -- # set +x 00:20:07.452 11:47:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.452 11:47:38 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:07.452 11:47:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.452 11:47:38 -- common/autotest_common.sh@10 -- # set +x 00:20:07.452 [2024-12-03 11:47:38.022857] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:07.452 11:47:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.452 11:47:38 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:07.452 11:47:38 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:07.452 11:47:38 -- nvmf/common.sh@520 -- # config=() 00:20:07.452 11:47:38 -- nvmf/common.sh@520 -- # local subsystem config 00:20:07.452 11:47:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:07.452 11:47:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:07.452 { 00:20:07.452 "params": { 00:20:07.452 "name": "Nvme$subsystem", 00:20:07.452 "trtype": "$TEST_TRANSPORT", 00:20:07.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.452 "adrfam": "ipv4", 00:20:07.452 "trsvcid": "$NVMF_PORT", 00:20:07.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.452 "hdgst": ${hdgst:-false}, 00:20:07.452 "ddgst": ${ddgst:-false} 00:20:07.452 }, 00:20:07.452 "method": "bdev_nvme_attach_controller" 00:20:07.452 } 00:20:07.452 EOF 00:20:07.452 )") 00:20:07.452 11:47:38 -- nvmf/common.sh@542 -- # cat 00:20:07.452 11:47:38 -- nvmf/common.sh@544 -- # jq . 00:20:07.452 11:47:38 -- nvmf/common.sh@545 -- # IFS=, 00:20:07.452 11:47:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:07.452 "params": { 00:20:07.452 "name": "Nvme1", 00:20:07.452 "trtype": "rdma", 00:20:07.452 "traddr": "192.168.100.8", 00:20:07.452 "adrfam": "ipv4", 00:20:07.452 "trsvcid": "4420", 00:20:07.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.452 "hdgst": false, 00:20:07.452 "ddgst": false 00:20:07.452 }, 00:20:07.452 "method": "bdev_nvme_attach_controller" 00:20:07.452 }' 00:20:07.710 [2024-12-03 11:47:38.071216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
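The --json /dev/fd/62 argument above feeds bdevio a configuration produced by gen_nvmf_target_json; the bdev_nvme_attach_controller entry it prints in the trace is reproduced here, reformatted for readability. It simply points the host-side bdev layer at the subsystem and listener created a few lines earlier:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

bdevio then runs its block-device suite (write/read, reset, compare-and-write, passthru) against the resulting Nvme1n1 bdev, which is what the "Suite: bdevio tests on: Nvme1n1" output below reports.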
00:20:07.710 [2024-12-03 11:47:38.071267] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782112 ] 00:20:07.710 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.710 [2024-12-03 11:47:38.141901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.710 [2024-12-03 11:47:38.212258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.710 [2024-12-03 11:47:38.212351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.710 [2024-12-03 11:47:38.212353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.968 [2024-12-03 11:47:38.384293] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:07.968 [2024-12-03 11:47:38.384324] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:07.968 I/O targets: 00:20:07.968 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:07.968 00:20:07.968 00:20:07.968 CUnit - A unit testing framework for C - Version 2.1-3 00:20:07.968 http://cunit.sourceforge.net/ 00:20:07.968 00:20:07.968 00:20:07.968 Suite: bdevio tests on: Nvme1n1 00:20:07.968 Test: blockdev write read block ...passed 00:20:07.968 Test: blockdev write zeroes read block ...passed 00:20:07.968 Test: blockdev write zeroes read no split ...passed 00:20:07.968 Test: blockdev write zeroes read split ...passed 00:20:07.968 Test: blockdev write zeroes read split partial ...passed 00:20:07.968 Test: blockdev reset ...[2024-12-03 11:47:38.414242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.968 [2024-12-03 11:47:38.436757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:07.968 [2024-12-03 11:47:38.463567] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:07.968 passed 00:20:07.968 Test: blockdev write read 8 blocks ...passed 00:20:07.968 Test: blockdev write read size > 128k ...passed 00:20:07.968 Test: blockdev write read invalid size ...passed 00:20:07.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:07.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:07.968 Test: blockdev write read max offset ...passed 00:20:07.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:07.968 Test: blockdev writev readv 8 blocks ...passed 00:20:07.968 Test: blockdev writev readv 30 x 1block ...passed 00:20:07.968 Test: blockdev writev readv block ...passed 00:20:07.968 Test: blockdev writev readv size > 128k ...passed 00:20:07.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:07.968 Test: blockdev comparev and writev ...[2024-12-03 11:47:38.466425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.968 [2024-12-03 11:47:38.466454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.466466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.968 [2024-12-03 11:47:38.466477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.466629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.968 [2024-12-03 11:47:38.466640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.466654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.968 [2024-12-03 11:47:38.466664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.466803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.968 [2024-12-03 11:47:38.466813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.466823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.968 [2024-12-03 11:47:38.466833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.466996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.968 [2024-12-03 11:47:38.467006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.467016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.968 [2024-12-03 11:47:38.467025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:07.968 passed 00:20:07.968 Test: blockdev nvme passthru rw ...passed 00:20:07.968 Test: blockdev nvme passthru vendor specific ...[2024-12-03 11:47:38.467287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:07.968 [2024-12-03 11:47:38.467299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.467346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:07.968 [2024-12-03 11:47:38.467356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.467404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:07.968 [2024-12-03 11:47:38.467414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:07.968 [2024-12-03 11:47:38.467461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:07.968 [2024-12-03 11:47:38.467471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:07.968 passed 00:20:07.968 Test: blockdev nvme admin passthru ...passed 00:20:07.968 Test: blockdev copy ...passed 00:20:07.968 00:20:07.968 Run Summary: Type Total Ran Passed Failed Inactive 00:20:07.968 suites 1 1 n/a 0 0 00:20:07.968 tests 23 23 23 0 0 00:20:07.968 asserts 152 152 152 0 n/a 00:20:07.968 00:20:07.968 Elapsed time = 0.170 seconds 00:20:08.226 11:47:38 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:08.226 11:47:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.226 11:47:38 -- common/autotest_common.sh@10 -- # set +x 00:20:08.226 11:47:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.226 11:47:38 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:08.226 11:47:38 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:08.226 11:47:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:08.226 11:47:38 -- nvmf/common.sh@116 -- # sync 00:20:08.226 11:47:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:08.226 11:47:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:08.226 11:47:38 -- nvmf/common.sh@119 -- # set +e 00:20:08.226 11:47:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:08.226 11:47:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:08.226 rmmod nvme_rdma 00:20:08.226 rmmod nvme_fabrics 00:20:08.226 11:47:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:08.226 11:47:38 -- nvmf/common.sh@123 -- # set -e 00:20:08.226 11:47:38 -- nvmf/common.sh@124 -- # return 0 00:20:08.226 11:47:38 -- nvmf/common.sh@477 -- # '[' -n 3781830 ']' 00:20:08.226 11:47:38 -- nvmf/common.sh@478 -- # killprocess 3781830 00:20:08.226 11:47:38 -- common/autotest_common.sh@936 -- # '[' -z 3781830 ']' 00:20:08.226 11:47:38 -- common/autotest_common.sh@940 -- # kill -0 3781830 00:20:08.226 11:47:38 -- common/autotest_common.sh@941 -- # uname 00:20:08.226 11:47:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.226 11:47:38 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3781830 00:20:08.226 11:47:38 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:08.226 11:47:38 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:08.226 11:47:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3781830' 00:20:08.226 killing process with pid 3781830 00:20:08.226 11:47:38 -- common/autotest_common.sh@955 -- # kill 3781830 00:20:08.226 11:47:38 -- common/autotest_common.sh@960 -- # wait 3781830 00:20:08.793 11:47:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:08.793 11:47:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:08.793 00:20:08.793 real 0m8.958s 00:20:08.793 user 0m10.860s 00:20:08.793 sys 0m5.674s 00:20:08.793 11:47:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.793 11:47:39 -- common/autotest_common.sh@10 -- # set +x 00:20:08.793 ************************************ 00:20:08.793 END TEST nvmf_bdevio 00:20:08.793 ************************************ 00:20:08.793 11:47:39 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:08.793 11:47:39 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:08.793 11:47:39 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:08.793 11:47:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:08.793 11:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.793 11:47:39 -- common/autotest_common.sh@10 -- # set +x 00:20:08.793 ************************************ 00:20:08.793 START TEST nvmf_fuzz 00:20:08.793 ************************************ 00:20:08.793 11:47:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:08.793 * Looking for test storage... 00:20:08.793 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:08.793 11:47:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:08.793 11:47:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:08.793 11:47:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:08.793 11:47:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:08.793 11:47:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:08.793 11:47:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:08.793 11:47:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:08.793 11:47:39 -- scripts/common.sh@335 -- # IFS=.-: 00:20:08.793 11:47:39 -- scripts/common.sh@335 -- # read -ra ver1 00:20:08.793 11:47:39 -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.793 11:47:39 -- scripts/common.sh@336 -- # read -ra ver2 00:20:08.793 11:47:39 -- scripts/common.sh@337 -- # local 'op=<' 00:20:08.793 11:47:39 -- scripts/common.sh@339 -- # ver1_l=2 00:20:08.793 11:47:39 -- scripts/common.sh@340 -- # ver2_l=1 00:20:08.793 11:47:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:08.793 11:47:39 -- scripts/common.sh@343 -- # case "$op" in 00:20:08.793 11:47:39 -- scripts/common.sh@344 -- # : 1 00:20:08.793 11:47:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:08.793 11:47:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:08.793 11:47:39 -- scripts/common.sh@364 -- # decimal 1 00:20:08.793 11:47:39 -- scripts/common.sh@352 -- # local d=1 00:20:08.793 11:47:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.793 11:47:39 -- scripts/common.sh@354 -- # echo 1 00:20:08.793 11:47:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:08.793 11:47:39 -- scripts/common.sh@365 -- # decimal 2 00:20:08.793 11:47:39 -- scripts/common.sh@352 -- # local d=2 00:20:08.793 11:47:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.793 11:47:39 -- scripts/common.sh@354 -- # echo 2 00:20:08.793 11:47:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:08.793 11:47:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:08.793 11:47:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:08.793 11:47:39 -- scripts/common.sh@367 -- # return 0 00:20:08.793 11:47:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.793 11:47:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.793 --rc genhtml_branch_coverage=1 00:20:08.793 --rc genhtml_function_coverage=1 00:20:08.793 --rc genhtml_legend=1 00:20:08.793 --rc geninfo_all_blocks=1 00:20:08.793 --rc geninfo_unexecuted_blocks=1 00:20:08.793 00:20:08.793 ' 00:20:08.793 11:47:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.793 --rc genhtml_branch_coverage=1 00:20:08.793 --rc genhtml_function_coverage=1 00:20:08.793 --rc genhtml_legend=1 00:20:08.793 --rc geninfo_all_blocks=1 00:20:08.793 --rc geninfo_unexecuted_blocks=1 00:20:08.793 00:20:08.793 ' 00:20:08.793 11:47:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.793 --rc genhtml_branch_coverage=1 00:20:08.793 --rc genhtml_function_coverage=1 00:20:08.793 --rc genhtml_legend=1 00:20:08.793 --rc geninfo_all_blocks=1 00:20:08.793 --rc geninfo_unexecuted_blocks=1 00:20:08.793 00:20:08.793 ' 00:20:08.793 11:47:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.793 --rc genhtml_branch_coverage=1 00:20:08.793 --rc genhtml_function_coverage=1 00:20:08.793 --rc genhtml_legend=1 00:20:08.793 --rc geninfo_all_blocks=1 00:20:08.793 --rc geninfo_unexecuted_blocks=1 00:20:08.793 00:20:08.793 ' 00:20:08.793 11:47:39 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.793 11:47:39 -- nvmf/common.sh@7 -- # uname -s 00:20:08.793 11:47:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.793 11:47:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.793 11:47:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.793 11:47:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.793 11:47:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.793 11:47:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.793 11:47:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.793 11:47:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.793 11:47:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.793 11:47:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.793 11:47:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:08.793 11:47:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:08.793 11:47:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.793 11:47:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.793 11:47:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.793 11:47:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:08.793 11:47:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.793 11:47:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.793 11:47:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.793 11:47:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.793 11:47:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.793 11:47:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.793 11:47:39 -- paths/export.sh@5 -- # export PATH 00:20:08.793 11:47:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.793 11:47:39 -- nvmf/common.sh@46 -- # : 0 00:20:08.793 11:47:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:08.793 11:47:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:08.793 11:47:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:08.793 11:47:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.793 11:47:39 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.793 11:47:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:08.793 11:47:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:08.793 11:47:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:08.793 11:47:39 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:08.793 11:47:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:08.793 11:47:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.793 11:47:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:08.793 11:47:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:08.793 11:47:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:08.793 11:47:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.793 11:47:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.793 11:47:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.793 11:47:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:08.793 11:47:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:08.793 11:47:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:08.793 11:47:39 -- common/autotest_common.sh@10 -- # set +x 00:20:15.341 11:47:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:15.341 11:47:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:15.341 11:47:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:15.341 11:47:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:15.341 11:47:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:15.341 11:47:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:15.341 11:47:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:15.341 11:47:45 -- nvmf/common.sh@294 -- # net_devs=() 00:20:15.342 11:47:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:15.342 11:47:45 -- nvmf/common.sh@295 -- # e810=() 00:20:15.342 11:47:45 -- nvmf/common.sh@295 -- # local -ga e810 00:20:15.342 11:47:45 -- nvmf/common.sh@296 -- # x722=() 00:20:15.342 11:47:45 -- nvmf/common.sh@296 -- # local -ga x722 00:20:15.342 11:47:45 -- nvmf/common.sh@297 -- # mlx=() 00:20:15.342 11:47:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:15.342 11:47:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.342 11:47:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:15.342 11:47:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:15.342 11:47:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:15.342 11:47:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
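The device discovery running here matches NICs by PCI vendor and device ID (0x15b3 is the Mellanox vendor ID) and then loads the RDMA kernel modules before assigning addresses. A minimal standalone sketch of the same two steps, assuming lspci is available on the test host and using the module list that appears in the entries below:

    # list the Mellanox NICs this run reports (0x15b3:0x1015, enumerated at 0000:d9:00.0/00.1)
    lspci -d 15b3:1015
    # load the IB/RDMA modules the test loads before allocate_nic_ips
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done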
00:20:15.342 11:47:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:15.342 11:47:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:15.342 11:47:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:15.342 11:47:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:15.342 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:15.342 11:47:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:15.342 11:47:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:15.342 11:47:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:15.342 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:15.342 11:47:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:15.342 11:47:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:15.342 11:47:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:15.342 11:47:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:15.342 11:47:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.600 11:47:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:15.600 11:47:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.600 11:47:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:15.600 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:15.600 11:47:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.600 11:47:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:15.600 11:47:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.600 11:47:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:15.600 11:47:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.600 11:47:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:15.600 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:15.600 11:47:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.600 11:47:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:15.600 11:47:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:15.600 11:47:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:15.600 11:47:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:15.600 11:47:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:15.600 11:47:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:15.600 11:47:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:15.600 11:47:45 -- nvmf/common.sh@57 -- # uname 00:20:15.600 11:47:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:15.600 11:47:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:15.600 11:47:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:15.600 11:47:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:15.600 
11:47:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:15.600 11:47:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:15.600 11:47:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:15.600 11:47:46 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:15.600 11:47:46 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:15.600 11:47:46 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:15.600 11:47:46 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:15.600 11:47:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:15.600 11:47:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:15.600 11:47:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:15.600 11:47:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:15.600 11:47:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:15.600 11:47:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:15.600 11:47:46 -- nvmf/common.sh@104 -- # continue 2 00:20:15.600 11:47:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:15.600 11:47:46 -- nvmf/common.sh@104 -- # continue 2 00:20:15.600 11:47:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:15.600 11:47:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:15.600 11:47:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.600 11:47:46 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:15.600 11:47:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:15.600 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:15.600 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:15.600 altname enp217s0f0np0 00:20:15.600 altname ens818f0np0 00:20:15.600 inet 192.168.100.8/24 scope global mlx_0_0 00:20:15.600 valid_lft forever preferred_lft forever 00:20:15.600 11:47:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:15.600 11:47:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:15.600 11:47:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.600 11:47:46 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:15.600 11:47:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:15.600 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:15.600 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:15.600 altname enp217s0f1np1 
00:20:15.600 altname ens818f1np1 00:20:15.600 inet 192.168.100.9/24 scope global mlx_0_1 00:20:15.600 valid_lft forever preferred_lft forever 00:20:15.600 11:47:46 -- nvmf/common.sh@410 -- # return 0 00:20:15.600 11:47:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:15.600 11:47:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:15.600 11:47:46 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:15.600 11:47:46 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:15.600 11:47:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:15.600 11:47:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:15.600 11:47:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:15.600 11:47:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:15.600 11:47:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:15.600 11:47:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:15.600 11:47:46 -- nvmf/common.sh@104 -- # continue 2 00:20:15.600 11:47:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.600 11:47:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:15.600 11:47:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:15.600 11:47:46 -- nvmf/common.sh@104 -- # continue 2 00:20:15.600 11:47:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:15.600 11:47:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:15.600 11:47:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.600 11:47:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:15.600 11:47:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:15.600 11:47:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.600 11:47:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.600 11:47:46 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:15.600 192.168.100.9' 00:20:15.600 11:47:46 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:15.600 192.168.100.9' 00:20:15.600 11:47:46 -- nvmf/common.sh@445 -- # head -n 1 00:20:15.600 11:47:46 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:15.600 11:47:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:15.600 192.168.100.9' 00:20:15.600 11:47:46 -- nvmf/common.sh@446 -- # tail -n +2 00:20:15.600 11:47:46 -- nvmf/common.sh@446 -- # head -n 1 00:20:15.600 11:47:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:15.600 11:47:46 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:15.600 11:47:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:20:15.600 11:47:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:15.600 11:47:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:15.600 11:47:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:15.600 11:47:46 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3785567 00:20:15.600 11:47:46 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:15.600 11:47:46 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:15.600 11:47:46 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3785567 00:20:15.600 11:47:46 -- common/autotest_common.sh@829 -- # '[' -z 3785567 ']' 00:20:15.600 11:47:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.600 11:47:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.600 11:47:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.600 11:47:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.600 11:47:46 -- common/autotest_common.sh@10 -- # set +x 00:20:16.533 11:47:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.533 11:47:47 -- common/autotest_common.sh@862 -- # return 0 00:20:16.533 11:47:47 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:16.533 11:47:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.533 11:47:47 -- common/autotest_common.sh@10 -- # set +x 00:20:16.533 11:47:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.533 11:47:47 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:16.533 11:47:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.533 11:47:47 -- common/autotest_common.sh@10 -- # set +x 00:20:16.533 Malloc0 00:20:16.533 11:47:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.533 11:47:47 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:16.791 11:47:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.791 11:47:47 -- common/autotest_common.sh@10 -- # set +x 00:20:16.791 11:47:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.791 11:47:47 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:16.791 11:47:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.791 11:47:47 -- common/autotest_common.sh@10 -- # set +x 00:20:16.791 11:47:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.791 11:47:47 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:16.791 11:47:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.791 11:47:47 -- common/autotest_common.sh@10 -- # set +x 00:20:16.791 11:47:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.791 11:47:47 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:16.791 11:47:47 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:20:48.839 Fuzzing completed. Shutting down the fuzz application 00:20:48.839 00:20:48.839 Dumping successful admin opcodes: 00:20:48.839 8, 9, 10, 24, 00:20:48.839 Dumping successful io opcodes: 00:20:48.839 0, 9, 00:20:48.839 NS: 0x200003af1f00 I/O qp, Total commands completed: 999255, total successful commands: 5852, random_seed: 978158080 00:20:48.839 NS: 0x200003af1f00 admin qp, Total commands completed: 129872, total successful commands: 1057, random_seed: 1833245824 00:20:48.839 11:48:17 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:48.839 Fuzzing completed. Shutting down the fuzz application 00:20:48.839 00:20:48.839 Dumping successful admin opcodes: 00:20:48.839 24, 00:20:48.839 Dumping successful io opcodes: 00:20:48.839 00:20:48.839 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2450357640 00:20:48.839 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2450436358 00:20:48.839 11:48:18 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.839 11:48:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.839 11:48:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.839 11:48:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.839 11:48:18 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:48.839 11:48:18 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:48.839 11:48:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:48.839 11:48:18 -- nvmf/common.sh@116 -- # sync 00:20:48.839 11:48:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:48.839 11:48:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:48.839 11:48:18 -- nvmf/common.sh@119 -- # set +e 00:20:48.839 11:48:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:48.839 11:48:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:48.839 rmmod nvme_rdma 00:20:48.839 rmmod nvme_fabrics 00:20:48.839 11:48:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:48.839 11:48:18 -- nvmf/common.sh@123 -- # set -e 00:20:48.839 11:48:18 -- nvmf/common.sh@124 -- # return 0 00:20:48.839 11:48:18 -- nvmf/common.sh@477 -- # '[' -n 3785567 ']' 00:20:48.839 11:48:18 -- nvmf/common.sh@478 -- # killprocess 3785567 00:20:48.839 11:48:18 -- common/autotest_common.sh@936 -- # '[' -z 3785567 ']' 00:20:48.839 11:48:19 -- common/autotest_common.sh@940 -- # kill -0 3785567 00:20:48.839 11:48:19 -- common/autotest_common.sh@941 -- # uname 00:20:48.839 11:48:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.839 11:48:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3785567 00:20:48.839 11:48:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:48.839 11:48:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:48.839 11:48:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3785567' 00:20:48.839 killing process with pid 3785567 00:20:48.839 11:48:19 -- common/autotest_common.sh@955 -- # kill 3785567 00:20:48.839 11:48:19 -- common/autotest_common.sh@960 -- # wait 3785567 00:20:48.839 
11:48:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:48.839 11:48:19 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:48.839 11:48:19 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:48.839 00:20:48.839 real 0m40.212s 00:20:48.839 user 0m50.245s 00:20:48.839 sys 0m21.300s 00:20:48.839 11:48:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:48.839 11:48:19 -- common/autotest_common.sh@10 -- # set +x 00:20:48.839 ************************************ 00:20:48.839 END TEST nvmf_fuzz 00:20:48.839 ************************************ 00:20:48.839 11:48:19 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:48.839 11:48:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:48.839 11:48:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:48.839 11:48:19 -- common/autotest_common.sh@10 -- # set +x 00:20:48.839 ************************************ 00:20:48.839 START TEST nvmf_multiconnection 00:20:48.839 ************************************ 00:20:48.839 11:48:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:49.098 * Looking for test storage... 00:20:49.098 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:49.098 11:48:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:49.098 11:48:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:49.098 11:48:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:49.098 11:48:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:49.098 11:48:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:49.098 11:48:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:49.098 11:48:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:49.098 11:48:19 -- scripts/common.sh@335 -- # IFS=.-: 00:20:49.098 11:48:19 -- scripts/common.sh@335 -- # read -ra ver1 00:20:49.098 11:48:19 -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.098 11:48:19 -- scripts/common.sh@336 -- # read -ra ver2 00:20:49.098 11:48:19 -- scripts/common.sh@337 -- # local 'op=<' 00:20:49.098 11:48:19 -- scripts/common.sh@339 -- # ver1_l=2 00:20:49.098 11:48:19 -- scripts/common.sh@340 -- # ver2_l=1 00:20:49.098 11:48:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:49.098 11:48:19 -- scripts/common.sh@343 -- # case "$op" in 00:20:49.098 11:48:19 -- scripts/common.sh@344 -- # : 1 00:20:49.098 11:48:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:49.098 11:48:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:49.098 11:48:19 -- scripts/common.sh@364 -- # decimal 1 00:20:49.098 11:48:19 -- scripts/common.sh@352 -- # local d=1 00:20:49.098 11:48:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.098 11:48:19 -- scripts/common.sh@354 -- # echo 1 00:20:49.098 11:48:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:49.098 11:48:19 -- scripts/common.sh@365 -- # decimal 2 00:20:49.098 11:48:19 -- scripts/common.sh@352 -- # local d=2 00:20:49.098 11:48:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.098 11:48:19 -- scripts/common.sh@354 -- # echo 2 00:20:49.098 11:48:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:49.098 11:48:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:49.098 11:48:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:49.098 11:48:19 -- scripts/common.sh@367 -- # return 0 00:20:49.098 11:48:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.098 11:48:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:49.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.098 --rc genhtml_branch_coverage=1 00:20:49.098 --rc genhtml_function_coverage=1 00:20:49.098 --rc genhtml_legend=1 00:20:49.098 --rc geninfo_all_blocks=1 00:20:49.098 --rc geninfo_unexecuted_blocks=1 00:20:49.098 00:20:49.098 ' 00:20:49.098 11:48:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:49.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.098 --rc genhtml_branch_coverage=1 00:20:49.098 --rc genhtml_function_coverage=1 00:20:49.098 --rc genhtml_legend=1 00:20:49.098 --rc geninfo_all_blocks=1 00:20:49.098 --rc geninfo_unexecuted_blocks=1 00:20:49.098 00:20:49.098 ' 00:20:49.098 11:48:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:49.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.098 --rc genhtml_branch_coverage=1 00:20:49.098 --rc genhtml_function_coverage=1 00:20:49.098 --rc genhtml_legend=1 00:20:49.098 --rc geninfo_all_blocks=1 00:20:49.098 --rc geninfo_unexecuted_blocks=1 00:20:49.098 00:20:49.098 ' 00:20:49.098 11:48:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:49.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.098 --rc genhtml_branch_coverage=1 00:20:49.098 --rc genhtml_function_coverage=1 00:20:49.098 --rc genhtml_legend=1 00:20:49.098 --rc geninfo_all_blocks=1 00:20:49.098 --rc geninfo_unexecuted_blocks=1 00:20:49.098 00:20:49.098 ' 00:20:49.098 11:48:19 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.098 11:48:19 -- nvmf/common.sh@7 -- # uname -s 00:20:49.098 11:48:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.098 11:48:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.098 11:48:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.098 11:48:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.098 11:48:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.098 11:48:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.098 11:48:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.098 11:48:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.098 11:48:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.098 11:48:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.098 11:48:19 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:49.098 11:48:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:49.098 11:48:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.098 11:48:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.098 11:48:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.098 11:48:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:49.098 11:48:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.098 11:48:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.098 11:48:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.099 11:48:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.099 11:48:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.099 11:48:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.099 11:48:19 -- paths/export.sh@5 -- # export PATH 00:20:49.099 11:48:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.099 11:48:19 -- nvmf/common.sh@46 -- # : 0 00:20:49.099 11:48:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:49.099 11:48:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:49.099 11:48:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:49.099 11:48:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.099 11:48:19 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.099 11:48:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:49.099 11:48:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:49.099 11:48:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:49.099 11:48:19 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:49.099 11:48:19 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:49.099 11:48:19 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:49.099 11:48:19 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:49.099 11:48:19 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:49.099 11:48:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.099 11:48:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:49.099 11:48:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:49.099 11:48:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:49.099 11:48:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.099 11:48:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.099 11:48:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.099 11:48:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:49.099 11:48:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:49.099 11:48:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:49.099 11:48:19 -- common/autotest_common.sh@10 -- # set +x 00:20:55.657 11:48:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:55.657 11:48:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:55.657 11:48:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:55.657 11:48:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:55.657 11:48:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:55.657 11:48:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:55.657 11:48:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:55.657 11:48:26 -- nvmf/common.sh@294 -- # net_devs=() 00:20:55.657 11:48:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:55.657 11:48:26 -- nvmf/common.sh@295 -- # e810=() 00:20:55.657 11:48:26 -- nvmf/common.sh@295 -- # local -ga e810 00:20:55.657 11:48:26 -- nvmf/common.sh@296 -- # x722=() 00:20:55.658 11:48:26 -- nvmf/common.sh@296 -- # local -ga x722 00:20:55.658 11:48:26 -- nvmf/common.sh@297 -- # mlx=() 00:20:55.658 11:48:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:55.658 11:48:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.658 11:48:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:55.658 11:48:26 -- nvmf/common.sh@320 -- # [[ 
rdma == rdma ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:55.658 11:48:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:55.658 11:48:26 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:55.658 11:48:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:55.658 11:48:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:55.658 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:55.658 11:48:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:55.658 11:48:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:55.658 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:55.658 11:48:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:55.658 11:48:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:55.658 11:48:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.658 11:48:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:55.658 11:48:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.658 11:48:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:55.658 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:55.658 11:48:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.658 11:48:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.658 11:48:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:55.658 11:48:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.658 11:48:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:55.658 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:55.658 11:48:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.658 11:48:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:55.658 11:48:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:55.658 11:48:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:55.658 11:48:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:55.658 11:48:26 -- nvmf/common.sh@57 -- # uname 00:20:55.658 11:48:26 -- nvmf/common.sh@57 -- # '[' 
Linux '!=' Linux ']' 00:20:55.658 11:48:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:55.658 11:48:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:55.658 11:48:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:55.658 11:48:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:55.658 11:48:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:55.658 11:48:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:55.658 11:48:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:55.658 11:48:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:55.658 11:48:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:55.658 11:48:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:55.658 11:48:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:55.658 11:48:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:55.658 11:48:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:55.658 11:48:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:55.658 11:48:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:55.658 11:48:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:55.658 11:48:26 -- nvmf/common.sh@104 -- # continue 2 00:20:55.658 11:48:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:55.658 11:48:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:55.658 11:48:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:55.658 11:48:26 -- nvmf/common.sh@104 -- # continue 2 00:20:55.658 11:48:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:55.658 11:48:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:55.658 11:48:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:55.658 11:48:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:55.658 11:48:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:55.658 11:48:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:55.917 11:48:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:55.917 11:48:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:55.917 11:48:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:55.917 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:55.917 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:55.917 altname enp217s0f0np0 00:20:55.917 altname ens818f0np0 00:20:55.917 inet 192.168.100.8/24 scope global mlx_0_0 00:20:55.917 valid_lft forever preferred_lft forever 00:20:55.917 11:48:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:55.917 11:48:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:55.917 11:48:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:55.917 11:48:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:55.917 11:48:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:55.917 11:48:26 -- 
nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:55.917 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:55.917 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:55.917 altname enp217s0f1np1 00:20:55.917 altname ens818f1np1 00:20:55.917 inet 192.168.100.9/24 scope global mlx_0_1 00:20:55.917 valid_lft forever preferred_lft forever 00:20:55.917 11:48:26 -- nvmf/common.sh@410 -- # return 0 00:20:55.917 11:48:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:55.917 11:48:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:55.917 11:48:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:55.917 11:48:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:55.917 11:48:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:55.917 11:48:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:55.917 11:48:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:55.917 11:48:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:55.917 11:48:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:55.917 11:48:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:55.917 11:48:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:55.917 11:48:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:55.917 11:48:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:55.917 11:48:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:55.917 11:48:26 -- nvmf/common.sh@104 -- # continue 2 00:20:55.917 11:48:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:55.917 11:48:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:55.917 11:48:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:55.917 11:48:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:55.917 11:48:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:55.917 11:48:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:55.917 11:48:26 -- nvmf/common.sh@104 -- # continue 2 00:20:55.917 11:48:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:55.917 11:48:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:55.917 11:48:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:55.917 11:48:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:55.917 11:48:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:55.917 11:48:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:55.917 11:48:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:55.917 11:48:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:55.917 192.168.100.9' 00:20:55.917 11:48:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:55.917 192.168.100.9' 00:20:55.917 11:48:26 -- nvmf/common.sh@445 -- # head -n 1 00:20:55.917 11:48:26 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:55.917 11:48:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:55.917 192.168.100.9' 00:20:55.917 11:48:26 -- nvmf/common.sh@446 -- # tail -n +2 00:20:55.917 11:48:26 -- nvmf/common.sh@446 -- # head -n 1 00:20:55.917 11:48:26 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:55.917 11:48:26 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:55.917 11:48:26 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:55.917 11:48:26 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:55.917 11:48:26 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:55.917 11:48:26 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:55.917 11:48:26 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:55.917 11:48:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:55.917 11:48:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.917 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:55.917 11:48:26 -- nvmf/common.sh@469 -- # nvmfpid=3795153 00:20:55.917 11:48:26 -- nvmf/common.sh@470 -- # waitforlisten 3795153 00:20:55.917 11:48:26 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:55.917 11:48:26 -- common/autotest_common.sh@829 -- # '[' -z 3795153 ']' 00:20:55.917 11:48:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.917 11:48:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.917 11:48:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.917 11:48:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.917 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:55.917 [2024-12-03 11:48:26.469306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:55.917 [2024-12-03 11:48:26.469359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.917 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.175 [2024-12-03 11:48:26.540345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.175 [2024-12-03 11:48:26.611499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:56.175 [2024-12-03 11:48:26.611617] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.175 [2024-12-03 11:48:26.611627] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.175 [2024-12-03 11:48:26.611636] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
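The nvmfappstart step above launched a fresh target for this suite with a four-core mask; the EAL messages confirm four available cores, the 0xFFFF tracepoint group mask, and one reactor per core. A sketch of launching the same target by hand and waiting for its RPC socket, assuming the SPDK build tree and the default socket path shown in this log:

    # -i: shared memory id, -e: tracepoint group mask, -m: core mask (0xF = 4 cores)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten in the test polls the UNIX domain socket; a crude equivalent:
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done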
00:20:56.175 [2024-12-03 11:48:26.611691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.175 [2024-12-03 11:48:26.611785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.175 [2024-12-03 11:48:26.611899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.175 [2024-12-03 11:48:26.611901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.739 11:48:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.739 11:48:27 -- common/autotest_common.sh@862 -- # return 0 00:20:56.739 11:48:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:56.739 11:48:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.739 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.739 11:48:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.739 11:48:27 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:56.739 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.739 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.739 [2024-12-03 11:48:27.352505] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24d1090/0x24d5580) succeed. 00:20:56.997 [2024-12-03 11:48:27.361704] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24d2680/0x2516c20) succeed. 00:20:56.997 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.997 11:48:27 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:56.997 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:56.997 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:56.997 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.997 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.997 Malloc1 00:20:56.997 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.997 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:56.997 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.997 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.997 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.997 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:56.997 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.997 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.997 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.997 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:56.997 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.997 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.997 [2024-12-03 11:48:27.540149] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:56.997 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.997 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:56.998 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:56.998 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.998 11:48:27 -- common/autotest_common.sh@10 -- # set +x 
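Each of the eleven subsystems is provisioned with the same four RPC calls that just ran for cnode1: create a 64 MB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as a namespace, and add an RDMA listener on the first target IP. A condensed sketch of that loop follows, using the parameters visible in the trace; invoking SPDK's rpc.py directly is an assumption here, since the test drives the same RPCs through its rpc_cmd wrapper against /var/tmp/spdk.sock.

    # Condensed version of the multiconnection.sh provisioning loop traced above.
    NVMF_SUBSYS=11
    NVMF_FIRST_TARGET_IP=192.168.100.8

    # Transport is created once, before the per-subsystem loop (see trace above).
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a "$NVMF_FIRST_TARGET_IP" -s 4420
    done

The initiator half of the test, further down in the log, then runs nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420 for each subsystem and polls lsblk -l -o NAME,SERIAL until the SPDK$i serial appears, before handing the resulting /dev/nvme*n1 devices to fio-wrapper for the read and randwrite passes.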
00:20:56.998 Malloc2 00:20:56.998 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.998 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:56.998 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.998 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.998 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.998 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:56.998 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.998 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.998 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.998 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:56.998 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.998 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.998 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.998 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:56.998 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:56.998 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.998 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.998 Malloc3 00:20:56.998 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.998 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.256 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 Malloc4 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:57.256 11:48:27 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.256 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 Malloc5 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.256 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 Malloc6 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.256 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 Malloc7 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.256 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.256 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:57.256 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.256 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 Malloc8 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.514 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 Malloc9 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.514 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 Malloc10 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 11:48:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.514 11:48:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:57.514 11:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.514 Malloc11 00:20:57.514 11:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.514 11:48:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:57.514 11:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.514 11:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:57.515 11:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.515 11:48:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:57.515 11:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.515 11:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:57.515 11:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.515 11:48:28 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:20:57.515 11:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.515 11:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:57.515 11:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.515 11:48:28 -- target/multiconnection.sh@28 -- # seq 1 11 00:20:57.515 11:48:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.515 11:48:28 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:58.444 11:48:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:58.444 11:48:29 -- common/autotest_common.sh@1187 -- # local i=0 00:20:58.444 11:48:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:58.444 11:48:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:58.444 11:48:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:00.419 11:48:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:00.677 11:48:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:00.677 11:48:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:21:00.677 11:48:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:00.677 11:48:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:00.677 11:48:31 -- common/autotest_common.sh@1197 -- # return 0 00:21:00.677 11:48:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:00.677 11:48:31 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:01.611 11:48:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:01.611 11:48:32 -- common/autotest_common.sh@1187 -- # local i=0 00:21:01.611 11:48:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:01.611 11:48:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:01.611 11:48:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:03.514 11:48:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:03.514 11:48:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:03.514 11:48:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:21:03.514 11:48:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:03.514 11:48:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:03.514 11:48:34 -- common/autotest_common.sh@1197 -- # return 0 00:21:03.514 11:48:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.514 11:48:34 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:04.448 11:48:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:04.448 11:48:35 -- common/autotest_common.sh@1187 -- # local i=0 00:21:04.448 11:48:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:04.448 11:48:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:04.448 11:48:35 -- 
common/autotest_common.sh@1194 -- # sleep 2 00:21:06.975 11:48:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:06.975 11:48:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:06.975 11:48:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:21:06.975 11:48:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:06.975 11:48:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:06.975 11:48:37 -- common/autotest_common.sh@1197 -- # return 0 00:21:06.975 11:48:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:06.975 11:48:37 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:07.541 11:48:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:07.541 11:48:38 -- common/autotest_common.sh@1187 -- # local i=0 00:21:07.541 11:48:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:07.541 11:48:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:07.541 11:48:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:10.070 11:48:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:10.070 11:48:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:10.070 11:48:40 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:21:10.070 11:48:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:10.070 11:48:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:10.070 11:48:40 -- common/autotest_common.sh@1197 -- # return 0 00:21:10.070 11:48:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.070 11:48:40 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:10.636 11:48:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:10.636 11:48:41 -- common/autotest_common.sh@1187 -- # local i=0 00:21:10.636 11:48:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:10.636 11:48:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:10.636 11:48:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:12.535 11:48:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:12.535 11:48:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:12.535 11:48:43 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:21:12.535 11:48:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:12.535 11:48:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:12.535 11:48:43 -- common/autotest_common.sh@1197 -- # return 0 00:21:12.535 11:48:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:12.535 11:48:43 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:13.469 11:48:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:13.469 11:48:44 -- common/autotest_common.sh@1187 -- # local i=0 00:21:13.469 11:48:44 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.469 11:48:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:13.469 11:48:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:15.990 11:48:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:15.990 11:48:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:15.990 11:48:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:21:15.990 11:48:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:15.990 11:48:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.990 11:48:46 -- common/autotest_common.sh@1197 -- # return 0 00:21:15.990 11:48:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.990 11:48:46 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:16.554 11:48:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:16.554 11:48:47 -- common/autotest_common.sh@1187 -- # local i=0 00:21:16.554 11:48:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:16.554 11:48:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:16.554 11:48:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:19.080 11:48:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:19.080 11:48:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:19.080 11:48:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:21:19.080 11:48:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:19.080 11:48:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:19.080 11:48:49 -- common/autotest_common.sh@1197 -- # return 0 00:21:19.080 11:48:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:19.080 11:48:49 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:19.646 11:48:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:19.646 11:48:50 -- common/autotest_common.sh@1187 -- # local i=0 00:21:19.646 11:48:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:19.646 11:48:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:19.646 11:48:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:21.547 11:48:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:21.547 11:48:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:21.547 11:48:52 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:21:21.547 11:48:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:21.547 11:48:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:21.547 11:48:52 -- common/autotest_common.sh@1197 -- # return 0 00:21:21.547 11:48:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.547 11:48:52 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:22.920 
11:48:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:22.920 11:48:53 -- common/autotest_common.sh@1187 -- # local i=0 00:21:22.920 11:48:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:22.920 11:48:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:22.920 11:48:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:24.817 11:48:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:24.817 11:48:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:24.817 11:48:55 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:21:24.817 11:48:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:24.817 11:48:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:24.817 11:48:55 -- common/autotest_common.sh@1197 -- # return 0 00:21:24.817 11:48:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:24.817 11:48:55 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:25.749 11:48:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:25.749 11:48:56 -- common/autotest_common.sh@1187 -- # local i=0 00:21:25.749 11:48:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:25.749 11:48:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:25.749 11:48:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:27.668 11:48:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:27.668 11:48:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:27.668 11:48:58 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:21:27.669 11:48:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:27.669 11:48:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:27.669 11:48:58 -- common/autotest_common.sh@1197 -- # return 0 00:21:27.669 11:48:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:27.669 11:48:58 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:21:28.606 11:48:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:28.606 11:48:59 -- common/autotest_common.sh@1187 -- # local i=0 00:21:28.606 11:48:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:28.606 11:48:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:28.606 11:48:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:31.133 11:49:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:31.133 11:49:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:31.133 11:49:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:21:31.133 11:49:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:31.133 11:49:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:31.133 11:49:01 -- common/autotest_common.sh@1197 -- # return 0 00:21:31.133 11:49:01 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:31.133 [global] 00:21:31.133 
thread=1 00:21:31.133 invalidate=1 00:21:31.133 rw=read 00:21:31.133 time_based=1 00:21:31.133 runtime=10 00:21:31.133 ioengine=libaio 00:21:31.133 direct=1 00:21:31.133 bs=262144 00:21:31.133 iodepth=64 00:21:31.133 norandommap=1 00:21:31.133 numjobs=1 00:21:31.133 00:21:31.134 [job0] 00:21:31.134 filename=/dev/nvme0n1 00:21:31.134 [job1] 00:21:31.134 filename=/dev/nvme10n1 00:21:31.134 [job2] 00:21:31.134 filename=/dev/nvme1n1 00:21:31.134 [job3] 00:21:31.134 filename=/dev/nvme2n1 00:21:31.134 [job4] 00:21:31.134 filename=/dev/nvme3n1 00:21:31.134 [job5] 00:21:31.134 filename=/dev/nvme4n1 00:21:31.134 [job6] 00:21:31.134 filename=/dev/nvme5n1 00:21:31.134 [job7] 00:21:31.134 filename=/dev/nvme6n1 00:21:31.134 [job8] 00:21:31.134 filename=/dev/nvme7n1 00:21:31.134 [job9] 00:21:31.134 filename=/dev/nvme8n1 00:21:31.134 [job10] 00:21:31.134 filename=/dev/nvme9n1 00:21:31.134 Could not set queue depth (nvme0n1) 00:21:31.134 Could not set queue depth (nvme10n1) 00:21:31.134 Could not set queue depth (nvme1n1) 00:21:31.134 Could not set queue depth (nvme2n1) 00:21:31.134 Could not set queue depth (nvme3n1) 00:21:31.134 Could not set queue depth (nvme4n1) 00:21:31.134 Could not set queue depth (nvme5n1) 00:21:31.134 Could not set queue depth (nvme6n1) 00:21:31.134 Could not set queue depth (nvme7n1) 00:21:31.134 Could not set queue depth (nvme8n1) 00:21:31.134 Could not set queue depth (nvme9n1) 00:21:31.392 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:31.392 fio-3.35 00:21:31.392 Starting 11 threads 00:21:43.614 00:21:43.614 job0: (groupid=0, jobs=1): err= 0: pid=3801529: Tue Dec 3 11:49:12 2024 00:21:43.614 read: IOPS=1344, BW=336MiB/s (352MB/s)(3373MiB/10037msec) 00:21:43.614 slat (usec): min=13, max=15032, avg=737.66, stdev=1792.74 00:21:43.614 clat (usec): min=11923, max=79508, avg=46831.93, stdev=6086.38 00:21:43.614 lat (usec): min=12182, max=81257, avg=47569.60, stdev=6344.11 00:21:43.614 clat percentiles (usec): 00:21:43.614 | 1.00th=[40633], 5.00th=[41681], 10.00th=[41681], 20.00th=[42730], 00:21:43.614 | 30.00th=[43254], 40.00th=[44303], 50.00th=[45876], 60.00th=[46400], 00:21:43.614 | 70.00th=[47449], 80.00th=[48497], 90.00th=[52691], 95.00th=[63177], 00:21:43.614 | 99.00th=[67634], 99.50th=[69731], 
99.90th=[73925], 99.95th=[74974], 00:21:43.614 | 99.99th=[77071] 00:21:43.614 bw ( KiB/s): min=247791, max=376832, per=9.08%, avg=343755.95, stdev=35920.99, samples=20 00:21:43.614 iops : min= 967, max= 1472, avg=1342.75, stdev=140.45, samples=20 00:21:43.614 lat (msec) : 20=0.23%, 50=85.83%, 100=13.94% 00:21:43.614 cpu : usr=0.68%, sys=5.84%, ctx=2553, majf=0, minf=4097 00:21:43.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.614 issued rwts: total=13490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.614 job1: (groupid=0, jobs=1): err= 0: pid=3801531: Tue Dec 3 11:49:12 2024 00:21:43.614 read: IOPS=812, BW=203MiB/s (213MB/s)(2044MiB/10063msec) 00:21:43.614 slat (usec): min=11, max=49375, avg=1196.47, stdev=4388.04 00:21:43.614 clat (msec): min=6, max=150, avg=77.46, stdev=16.64 00:21:43.615 lat (msec): min=6, max=150, avg=78.66, stdev=17.36 00:21:43.615 clat percentiles (msec): 00:21:43.615 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 63], 20.00th=[ 78], 00:21:43.615 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 84], 00:21:43.615 | 70.00th=[ 85], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 91], 00:21:43.615 | 99.00th=[ 112], 99.50th=[ 124], 99.90th=[ 138], 99.95th=[ 144], 00:21:43.615 | 99.99th=[ 150] 00:21:43.615 bw ( KiB/s): min=175616, max=384256, per=5.49%, avg=207731.20, stdev=44222.84, samples=20 00:21:43.615 iops : min= 686, max= 1501, avg=811.45, stdev=172.75, samples=20 00:21:43.615 lat (msec) : 10=0.33%, 20=0.80%, 50=7.14%, 100=89.92%, 250=1.81% 00:21:43.615 cpu : usr=0.28%, sys=3.40%, ctx=1635, majf=0, minf=4097 00:21:43.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:43.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.615 issued rwts: total=8176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.615 job2: (groupid=0, jobs=1): err= 0: pid=3801532: Tue Dec 3 11:49:12 2024 00:21:43.615 read: IOPS=780, BW=195MiB/s (205MB/s)(1962MiB/10061msec) 00:21:43.615 slat (usec): min=12, max=41625, avg=1271.69, stdev=4021.25 00:21:43.615 clat (msec): min=15, max=145, avg=80.66, stdev= 9.09 00:21:43.615 lat (msec): min=16, max=145, avg=81.93, stdev= 9.91 00:21:43.615 clat percentiles (msec): 00:21:43.615 | 1.00th=[ 60], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 78], 00:21:43.615 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 85], 00:21:43.615 | 70.00th=[ 85], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 91], 00:21:43.615 | 99.00th=[ 102], 99.50th=[ 118], 99.90th=[ 130], 99.95th=[ 130], 00:21:43.615 | 99.99th=[ 146] 00:21:43.615 bw ( KiB/s): min=174592, max=233984, per=5.26%, avg=199321.60, stdev=15770.49, samples=20 00:21:43.615 iops : min= 682, max= 914, avg=778.60, stdev=61.60, samples=20 00:21:43.615 lat (msec) : 20=0.25%, 50=0.27%, 100=98.32%, 250=1.16% 00:21:43.615 cpu : usr=0.22%, sys=2.46%, ctx=1560, majf=0, minf=4097 00:21:43.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:43.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.615 
issued rwts: total=7849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.615 job3: (groupid=0, jobs=1): err= 0: pid=3801533: Tue Dec 3 11:49:12 2024 00:21:43.615 read: IOPS=838, BW=210MiB/s (220MB/s)(2109MiB/10063msec) 00:21:43.615 slat (usec): min=11, max=42082, avg=1161.37, stdev=3994.83 00:21:43.615 clat (usec): min=950, max=140004, avg=75105.55, stdev=19571.50 00:21:43.615 lat (usec): min=966, max=140058, avg=76266.92, stdev=20220.84 00:21:43.615 clat percentiles (msec): 00:21:43.615 | 1.00th=[ 17], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 77], 00:21:43.615 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 84], 00:21:43.615 | 70.00th=[ 85], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 90], 00:21:43.615 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 136], 99.95th=[ 136], 00:21:43.615 | 99.99th=[ 140] 00:21:43.615 bw ( KiB/s): min=180224, max=391168, per=5.66%, avg=214348.80, stdev=50400.78, samples=20 00:21:43.615 iops : min= 704, max= 1528, avg=837.30, stdev=196.88, samples=20 00:21:43.615 lat (usec) : 1000=0.01% 00:21:43.615 lat (msec) : 2=0.33%, 4=0.11%, 20=1.07%, 50=11.39%, 100=84.95% 00:21:43.615 lat (msec) : 250=2.15% 00:21:43.615 cpu : usr=0.23%, sys=3.53%, ctx=1793, majf=0, minf=4097 00:21:43.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:43.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.615 issued rwts: total=8436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.615 job4: (groupid=0, jobs=1): err= 0: pid=3801534: Tue Dec 3 11:49:12 2024 00:21:43.615 read: IOPS=780, BW=195MiB/s (205MB/s)(1963MiB/10062msec) 00:21:43.615 slat (usec): min=16, max=22904, avg=1269.03, stdev=3058.96 00:21:43.615 clat (msec): min=12, max=145, avg=80.65, stdev= 9.53 00:21:43.615 lat (msec): min=12, max=145, avg=81.92, stdev=10.00 00:21:43.615 clat percentiles (msec): 00:21:43.615 | 1.00th=[ 58], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 78], 00:21:43.615 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 85], 00:21:43.615 | 70.00th=[ 85], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 91], 00:21:43.615 | 99.00th=[ 101], 99.50th=[ 109], 99.90th=[ 142], 99.95th=[ 142], 00:21:43.615 | 99.99th=[ 146] 00:21:43.615 bw ( KiB/s): min=179712, max=241152, per=5.27%, avg=199447.50, stdev=16305.24, samples=20 00:21:43.615 iops : min= 702, max= 942, avg=779.05, stdev=63.60, samples=20 00:21:43.615 lat (msec) : 20=0.43%, 50=0.39%, 100=98.13%, 250=1.04% 00:21:43.615 cpu : usr=0.33%, sys=3.99%, ctx=1546, majf=0, minf=4097 00:21:43.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:43.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.615 issued rwts: total=7853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.615 job5: (groupid=0, jobs=1): err= 0: pid=3801535: Tue Dec 3 11:49:12 2024 00:21:43.615 read: IOPS=1343, BW=336MiB/s (352MB/s)(3372MiB/10038msec) 00:21:43.615 slat (usec): min=12, max=21808, avg=737.59, stdev=1921.95 00:21:43.615 clat (usec): min=11132, max=87168, avg=46846.08, stdev=6287.37 00:21:43.615 lat (usec): min=11409, max=87931, avg=47583.67, stdev=6571.55 00:21:43.615 clat percentiles (usec): 
00:21:43.615 | 1.00th=[40633], 5.00th=[41681], 10.00th=[41681], 20.00th=[42730], 00:21:43.615 | 30.00th=[43254], 40.00th=[44303], 50.00th=[45876], 60.00th=[46400], 00:21:43.615 | 70.00th=[47449], 80.00th=[48497], 90.00th=[52691], 95.00th=[63177], 00:21:43.615 | 99.00th=[68682], 99.50th=[70779], 99.90th=[78119], 99.95th=[82314], 00:21:43.615 | 99.99th=[87557] 00:21:43.615 bw ( KiB/s): min=249344, max=377344, per=9.07%, avg=343654.40, stdev=36125.51, samples=20 00:21:43.615 iops : min= 974, max= 1474, avg=1342.40, stdev=141.12, samples=20 00:21:43.615 lat (msec) : 20=0.25%, 50=85.97%, 100=13.78% 00:21:43.615 cpu : usr=0.60%, sys=5.75%, ctx=2532, majf=0, minf=3659 00:21:43.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:43.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.615 issued rwts: total=13487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.615 job6: (groupid=0, jobs=1): err= 0: pid=3801537: Tue Dec 3 11:49:12 2024 00:21:43.615 read: IOPS=2049, BW=512MiB/s (537MB/s)(5137MiB/10023msec) 00:21:43.615 slat (usec): min=10, max=18406, avg=481.33, stdev=1155.73 00:21:43.615 clat (usec): min=12529, max=79946, avg=30706.97, stdev=7815.05 00:21:43.615 lat (usec): min=12775, max=84031, avg=31188.30, stdev=7981.55 00:21:43.615 clat percentiles (usec): 00:21:43.615 | 1.00th=[25560], 5.00th=[26346], 10.00th=[26608], 20.00th=[27395], 00:21:43.615 | 30.00th=[27919], 40.00th=[28443], 50.00th=[29230], 60.00th=[29754], 00:21:43.615 | 70.00th=[30278], 80.00th=[30802], 90.00th=[32113], 95.00th=[37487], 00:21:43.615 | 99.00th=[66847], 99.50th=[68682], 99.90th=[72877], 99.95th=[74974], 00:21:43.615 | 99.99th=[78119] 00:21:43.615 bw ( KiB/s): min=245248, max=592384, per=13.85%, avg=524390.40, stdev=97653.09, samples=20 00:21:43.615 iops : min= 958, max= 2314, avg=2048.40, stdev=381.46, samples=20 00:21:43.615 lat (msec) : 20=0.26%, 50=94.99%, 100=4.75% 00:21:43.615 cpu : usr=0.54%, sys=6.24%, ctx=4032, majf=0, minf=4097 00:21:43.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:43.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.615 issued rwts: total=20547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.615 job7: (groupid=0, jobs=1): err= 0: pid=3801543: Tue Dec 3 11:49:12 2024 00:21:43.615 read: IOPS=2384, BW=596MiB/s (625MB/s)(5998MiB/10061msec) 00:21:43.615 slat (usec): min=11, max=70447, avg=415.93, stdev=3163.71 00:21:43.615 clat (msec): min=7, max=183, avg=26.39, stdev=25.15 00:21:43.615 lat (msec): min=7, max=183, avg=26.81, stdev=25.71 00:21:43.615 clat percentiles (msec): 00:21:43.615 | 1.00th=[ 14], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 15], 00:21:43.615 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 16], 00:21:43.615 | 70.00th=[ 17], 80.00th=[ 27], 90.00th=[ 85], 95.00th=[ 87], 00:21:43.615 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 153], 99.95th=[ 153], 00:21:43.615 | 99.99th=[ 155] 00:21:43.615 bw ( KiB/s): min=177664, max=1072128, per=16.18%, avg=612603.55, stdev=408061.61, samples=20 00:21:43.615 iops : min= 694, max= 4188, avg=2392.95, stdev=1593.98, samples=20 00:21:43.615 lat (msec) : 10=0.10%, 20=78.81%, 50=6.87%, 100=13.75%, 250=0.46% 
00:21:43.615 cpu : usr=0.44%, sys=5.71%, ctx=4175, majf=0, minf=4097 00:21:43.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:43.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.615 issued rwts: total=23991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.615 job8: (groupid=0, jobs=1): err= 0: pid=3801544: Tue Dec 3 11:49:12 2024 00:21:43.615 read: IOPS=780, BW=195MiB/s (205MB/s)(1964MiB/10063msec) 00:21:43.615 slat (usec): min=16, max=35229, avg=1268.53, stdev=3363.98 00:21:43.615 clat (msec): min=12, max=142, avg=80.61, stdev= 9.51 00:21:43.615 lat (msec): min=12, max=142, avg=81.88, stdev=10.09 00:21:43.615 clat percentiles (msec): 00:21:43.615 | 1.00th=[ 59], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 78], 00:21:43.615 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 85], 00:21:43.615 | 70.00th=[ 86], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 91], 00:21:43.615 | 99.00th=[ 105], 99.50th=[ 117], 99.90th=[ 136], 99.95th=[ 142], 00:21:43.615 | 99.99th=[ 142] 00:21:43.615 bw ( KiB/s): min=180736, max=241152, per=5.27%, avg=199550.20, stdev=16501.13, samples=20 00:21:43.615 iops : min= 706, max= 942, avg=779.45, stdev=64.35, samples=20 00:21:43.616 lat (msec) : 20=0.45%, 50=0.43%, 100=97.93%, 250=1.20% 00:21:43.616 cpu : usr=0.31%, sys=4.01%, ctx=1540, majf=0, minf=4097 00:21:43.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:43.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.616 issued rwts: total=7857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.616 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.616 job9: (groupid=0, jobs=1): err= 0: pid=3801545: Tue Dec 3 11:49:12 2024 00:21:43.616 read: IOPS=2364, BW=591MiB/s (620MB/s)(5924MiB/10024msec) 00:21:43.616 slat (usec): min=11, max=8133, avg=419.67, stdev=978.70 00:21:43.616 clat (usec): min=2447, max=49074, avg=26616.74, stdev=5618.13 00:21:43.616 lat (usec): min=2677, max=49094, avg=27036.41, stdev=5745.13 00:21:43.616 clat percentiles (usec): 00:21:43.616 | 1.00th=[13960], 5.00th=[14746], 10.00th=[15533], 20.00th=[26084], 00:21:43.616 | 30.00th=[27132], 40.00th=[27657], 50.00th=[28181], 60.00th=[28967], 00:21:43.616 | 70.00th=[29754], 80.00th=[30540], 90.00th=[31065], 95.00th=[31851], 00:21:43.616 | 99.00th=[34341], 99.50th=[35390], 99.90th=[43254], 99.95th=[47449], 00:21:43.616 | 99.99th=[49021] 00:21:43.616 bw ( KiB/s): min=526336, max=1055744, per=15.98%, avg=605133.35, stdev=151647.35, samples=20 00:21:43.616 iops : min= 2056, max= 4124, avg=2363.80, stdev=592.37, samples=20 00:21:43.616 lat (msec) : 4=0.05%, 10=0.22%, 20=17.35%, 50=82.38% 00:21:43.616 cpu : usr=0.55%, sys=6.90%, ctx=4595, majf=0, minf=4097 00:21:43.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:43.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.616 issued rwts: total=23697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.616 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.616 job10: (groupid=0, jobs=1): err= 0: pid=3801546: Tue Dec 3 11:49:12 2024 00:21:43.616 read: IOPS=1343, BW=336MiB/s 
(352MB/s)(3371MiB/10037msec) 00:21:43.616 slat (usec): min=12, max=15740, avg=738.52, stdev=1883.96 00:21:43.616 clat (usec): min=12061, max=79094, avg=46857.61, stdev=6134.29 00:21:43.616 lat (usec): min=12329, max=80748, avg=47596.13, stdev=6403.99 00:21:43.616 clat percentiles (usec): 00:21:43.616 | 1.00th=[40633], 5.00th=[41681], 10.00th=[41681], 20.00th=[42730], 00:21:43.616 | 30.00th=[43254], 40.00th=[44303], 50.00th=[45876], 60.00th=[46400], 00:21:43.616 | 70.00th=[47449], 80.00th=[48497], 90.00th=[52691], 95.00th=[62653], 00:21:43.616 | 99.00th=[68682], 99.50th=[69731], 99.90th=[76022], 99.95th=[78119], 00:21:43.616 | 99.99th=[79168] 00:21:43.616 bw ( KiB/s): min=246765, max=376320, per=9.07%, avg=343551.05, stdev=36914.50, samples=20 00:21:43.616 iops : min= 963, max= 1470, avg=1341.95, stdev=144.33, samples=20 00:21:43.616 lat (msec) : 20=0.22%, 50=85.56%, 100=14.22% 00:21:43.616 cpu : usr=0.51%, sys=5.71%, ctx=2537, majf=0, minf=4097 00:21:43.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:43.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:43.616 issued rwts: total=13482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.616 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:43.616 00:21:43.616 Run status group 0 (all jobs): 00:21:43.616 READ: bw=3698MiB/s (3878MB/s), 195MiB/s-596MiB/s (205MB/s-625MB/s), io=36.3GiB (39.0GB), run=10023-10063msec 00:21:43.616 00:21:43.616 Disk stats (read/write): 00:21:43.616 nvme0n1: ios=26901/0, merge=0/0, ticks=1237602/0, in_queue=1237602, util=95.74% 00:21:43.616 nvme10n1: ios=16264/0, merge=0/0, ticks=1235153/0, in_queue=1235153, util=96.06% 00:21:43.616 nvme1n1: ios=15578/0, merge=0/0, ticks=1235383/0, in_queue=1235383, util=96.52% 00:21:43.616 nvme2n1: ios=16768/0, merge=0/0, ticks=1236936/0, in_queue=1236936, util=96.82% 00:21:43.616 nvme3n1: ios=15623/0, merge=0/0, ticks=1237790/0, in_queue=1237790, util=96.94% 00:21:43.616 nvme4n1: ios=26932/0, merge=0/0, ticks=1238321/0, in_queue=1238321, util=97.56% 00:21:43.616 nvme5n1: ios=41060/0, merge=0/0, ticks=1235142/0, in_queue=1235142, util=97.80% 00:21:43.616 nvme6n1: ios=47858/0, merge=0/0, ticks=1227158/0, in_queue=1227158, util=98.00% 00:21:43.616 nvme7n1: ios=15614/0, merge=0/0, ticks=1237436/0, in_queue=1237436, util=98.74% 00:21:43.616 nvme8n1: ios=47344/0, merge=0/0, ticks=1232591/0, in_queue=1232591, util=99.06% 00:21:43.616 nvme9n1: ios=26863/0, merge=0/0, ticks=1236556/0, in_queue=1236556, util=99.30% 00:21:43.616 11:49:12 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:43.616 [global] 00:21:43.616 thread=1 00:21:43.616 invalidate=1 00:21:43.616 rw=randwrite 00:21:43.616 time_based=1 00:21:43.616 runtime=10 00:21:43.616 ioengine=libaio 00:21:43.616 direct=1 00:21:43.616 bs=262144 00:21:43.616 iodepth=64 00:21:43.616 norandommap=1 00:21:43.616 numjobs=1 00:21:43.616 00:21:43.616 [job0] 00:21:43.616 filename=/dev/nvme0n1 00:21:43.616 [job1] 00:21:43.616 filename=/dev/nvme10n1 00:21:43.616 [job2] 00:21:43.616 filename=/dev/nvme1n1 00:21:43.616 [job3] 00:21:43.616 filename=/dev/nvme2n1 00:21:43.616 [job4] 00:21:43.616 filename=/dev/nvme3n1 00:21:43.616 [job5] 00:21:43.616 filename=/dev/nvme4n1 00:21:43.616 [job6] 00:21:43.616 filename=/dev/nvme5n1 00:21:43.616 [job7] 00:21:43.616 filename=/dev/nvme6n1 00:21:43.616 [job8] 
00:21:43.616 filename=/dev/nvme7n1 00:21:43.616 [job9] 00:21:43.616 filename=/dev/nvme8n1 00:21:43.616 [job10] 00:21:43.616 filename=/dev/nvme9n1 00:21:43.616 Could not set queue depth (nvme0n1) 00:21:43.616 Could not set queue depth (nvme10n1) 00:21:43.616 Could not set queue depth (nvme1n1) 00:21:43.616 Could not set queue depth (nvme2n1) 00:21:43.616 Could not set queue depth (nvme3n1) 00:21:43.616 Could not set queue depth (nvme4n1) 00:21:43.616 Could not set queue depth (nvme5n1) 00:21:43.616 Could not set queue depth (nvme6n1) 00:21:43.616 Could not set queue depth (nvme7n1) 00:21:43.616 Could not set queue depth (nvme8n1) 00:21:43.616 Could not set queue depth (nvme9n1) 00:21:43.616 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:43.616 fio-3.35 00:21:43.616 Starting 11 threads 00:21:53.682 00:21:53.682 job0: (groupid=0, jobs=1): err= 0: pid=3803287: Tue Dec 3 11:49:23 2024 00:21:53.682 write: IOPS=1528, BW=382MiB/s (401MB/s)(3838MiB/10043msec); 0 zone resets 00:21:53.682 slat (usec): min=22, max=10489, avg=644.08, stdev=1219.72 00:21:53.682 clat (usec): min=4291, max=93448, avg=41212.55, stdev=8617.96 00:21:53.682 lat (usec): min=4355, max=98362, avg=41856.63, stdev=8729.32 00:21:53.682 clat percentiles (usec): 00:21:53.682 | 1.00th=[32637], 5.00th=[34341], 10.00th=[34866], 20.00th=[35914], 00:21:53.682 | 30.00th=[36439], 40.00th=[36963], 50.00th=[37487], 60.00th=[38011], 00:21:53.682 | 70.00th=[38536], 80.00th=[51643], 90.00th=[53740], 95.00th=[55837], 00:21:53.682 | 99.00th=[68682], 99.50th=[71828], 99.90th=[82314], 99.95th=[90702], 00:21:53.682 | 99.99th=[93848] 00:21:53.682 bw ( KiB/s): min=281548, max=444416, per=11.15%, avg=391344.60, stdev=59768.55, samples=20 00:21:53.682 iops : min= 1099, max= 1736, avg=1528.65, stdev=233.55, samples=20 00:21:53.682 lat (msec) : 10=0.10%, 20=0.19%, 50=75.38%, 100=24.33% 00:21:53.682 cpu : usr=3.32%, sys=5.25%, ctx=3774, majf=0, minf=1 00:21:53.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:53.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.682 issued 
rwts: total=0,15351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.683 job1: (groupid=0, jobs=1): err= 0: pid=3803299: Tue Dec 3 11:49:23 2024 00:21:53.683 write: IOPS=925, BW=231MiB/s (243MB/s)(2328MiB/10063msec); 0 zone resets 00:21:53.683 slat (usec): min=24, max=17213, avg=1068.75, stdev=2179.44 00:21:53.683 clat (msec): min=4, max=152, avg=68.06, stdev=16.73 00:21:53.683 lat (msec): min=4, max=152, avg=69.13, stdev=17.03 00:21:53.683 clat percentiles (msec): 00:21:53.683 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 53], 20.00th=[ 55], 00:21:53.683 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 71], 00:21:53.683 | 70.00th=[ 86], 80.00th=[ 88], 90.00th=[ 90], 95.00th=[ 93], 00:21:53.683 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 136], 99.95th=[ 142], 00:21:53.683 | 99.99th=[ 153] 00:21:53.683 bw ( KiB/s): min=176640, max=299520, per=6.74%, avg=236809.70, stdev=55233.55, samples=20 00:21:53.683 iops : min= 690, max= 1170, avg=925.00, stdev=215.77, samples=20 00:21:53.683 lat (msec) : 10=0.01%, 20=0.15%, 50=0.75%, 100=98.27%, 250=0.82% 00:21:53.683 cpu : usr=2.11%, sys=4.45%, ctx=2299, majf=0, minf=1 00:21:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.683 issued rwts: total=0,9313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.683 job2: (groupid=0, jobs=1): err= 0: pid=3803300: Tue Dec 3 11:49:23 2024 00:21:53.683 write: IOPS=2261, BW=565MiB/s (593MB/s)(5677MiB/10041msec); 0 zone resets 00:21:53.683 slat (usec): min=15, max=39282, avg=434.16, stdev=959.92 00:21:53.683 clat (usec): min=4179, max=99521, avg=27852.87, stdev=14391.06 00:21:53.683 lat (usec): min=4233, max=99624, avg=28287.03, stdev=14607.16 00:21:53.683 clat percentiles (usec): 00:21:53.683 | 1.00th=[14746], 5.00th=[17171], 10.00th=[17433], 20.00th=[17957], 00:21:53.683 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19006], 60.00th=[19530], 00:21:53.683 | 70.00th=[35914], 80.00th=[38011], 90.00th=[54789], 95.00th=[56886], 00:21:53.683 | 99.00th=[65799], 99.50th=[69731], 99.90th=[79168], 99.95th=[87557], 00:21:53.683 | 99.99th=[94897] 00:21:53.683 bw ( KiB/s): min=280015, max=882688, per=16.51%, avg=579709.55, stdev=252382.08, samples=20 00:21:53.683 iops : min= 1093, max= 3448, avg=2264.45, stdev=985.92, samples=20 00:21:53.683 lat (msec) : 10=0.53%, 20=62.46%, 50=21.89%, 100=15.12% 00:21:53.683 cpu : usr=3.53%, sys=5.77%, ctx=4898, majf=0, minf=1 00:21:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.683 issued rwts: total=0,22709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.683 job3: (groupid=0, jobs=1): err= 0: pid=3803301: Tue Dec 3 11:49:23 2024 00:21:53.683 write: IOPS=924, BW=231MiB/s (242MB/s)(2326MiB/10064msec); 0 zone resets 00:21:53.683 slat (usec): min=27, max=22675, avg=1069.68, stdev=2244.40 00:21:53.683 clat (msec): min=13, max=147, avg=68.12, stdev=16.70 00:21:53.683 lat (msec): min=13, max=153, avg=69.19, stdev=17.00 00:21:53.683 clat percentiles (msec): 00:21:53.683 | 1.00th=[ 51], 
5.00th=[ 53], 10.00th=[ 53], 20.00th=[ 55], 00:21:53.683 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 71], 00:21:53.683 | 70.00th=[ 86], 80.00th=[ 88], 90.00th=[ 90], 95.00th=[ 92], 00:21:53.683 | 99.00th=[ 100], 99.50th=[ 107], 99.90th=[ 138], 99.95th=[ 144], 00:21:53.683 | 99.99th=[ 148] 00:21:53.683 bw ( KiB/s): min=174080, max=298496, per=6.74%, avg=236604.40, stdev=55038.74, samples=20 00:21:53.683 iops : min= 680, max= 1166, avg=924.20, stdev=215.01, samples=20 00:21:53.683 lat (msec) : 20=0.09%, 50=0.59%, 100=98.40%, 250=0.92% 00:21:53.683 cpu : usr=2.34%, sys=4.13%, ctx=2292, majf=0, minf=1 00:21:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.683 issued rwts: total=0,9305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.683 job4: (groupid=0, jobs=1): err= 0: pid=3803302: Tue Dec 3 11:49:23 2024 00:21:53.683 write: IOPS=1223, BW=306MiB/s (321MB/s)(3067MiB/10029msec); 0 zone resets 00:21:53.683 slat (usec): min=21, max=11114, avg=810.16, stdev=1417.11 00:21:53.683 clat (usec): min=15771, max=80211, avg=51489.45, stdev=8482.95 00:21:53.683 lat (usec): min=15826, max=80287, avg=52299.61, stdev=8578.36 00:21:53.683 clat percentiles (usec): 00:21:53.683 | 1.00th=[32375], 5.00th=[34866], 10.00th=[36439], 20.00th=[38011], 00:21:53.683 | 30.00th=[52691], 40.00th=[54264], 50.00th=[55313], 60.00th=[55837], 00:21:53.683 | 70.00th=[56361], 80.00th=[56886], 90.00th=[57934], 95.00th=[58983], 00:21:53.683 | 99.00th=[62653], 99.50th=[67634], 99.90th=[72877], 99.95th=[73925], 00:21:53.683 | 99.99th=[74974] 00:21:53.683 bw ( KiB/s): min=262156, max=455792, per=8.90%, avg=312428.60, stdev=57228.28, samples=20 00:21:53.683 iops : min= 1024, max= 1780, avg=1220.40, stdev=223.49, samples=20 00:21:53.683 lat (msec) : 20=0.06%, 50=22.02%, 100=77.92% 00:21:53.683 cpu : usr=2.88%, sys=5.31%, ctx=3056, majf=0, minf=1 00:21:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.683 issued rwts: total=0,12268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.683 job5: (groupid=0, jobs=1): err= 0: pid=3803303: Tue Dec 3 11:49:23 2024 00:21:53.683 write: IOPS=943, BW=236MiB/s (247MB/s)(2373MiB/10064msec); 0 zone resets 00:21:53.683 slat (usec): min=23, max=26343, avg=1038.78, stdev=2286.15 00:21:53.683 clat (msec): min=4, max=150, avg=66.80, stdev=17.33 00:21:53.683 lat (msec): min=4, max=150, avg=67.84, stdev=17.65 00:21:53.683 clat percentiles (msec): 00:21:53.683 | 1.00th=[ 48], 5.00th=[ 52], 10.00th=[ 52], 20.00th=[ 53], 00:21:53.683 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 59], 00:21:53.683 | 70.00th=[ 86], 80.00th=[ 88], 90.00th=[ 90], 95.00th=[ 92], 00:21:53.683 | 99.00th=[ 101], 99.50th=[ 110], 99.90th=[ 138], 99.95th=[ 148], 00:21:53.683 | 99.99th=[ 150] 00:21:53.683 bw ( KiB/s): min=174080, max=306688, per=6.87%, avg=241365.80, stdev=58051.52, samples=20 00:21:53.683 iops : min= 680, max= 1198, avg=942.80, stdev=226.78, samples=20 00:21:53.683 lat (msec) : 10=0.06%, 20=0.13%, 50=1.77%, 100=97.05%, 250=0.99% 00:21:53.683 cpu : 
usr=2.10%, sys=4.13%, ctx=2329, majf=0, minf=1 00:21:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.683 issued rwts: total=0,9491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.683 job6: (groupid=0, jobs=1): err= 0: pid=3803304: Tue Dec 3 11:49:23 2024 00:21:53.683 write: IOPS=1554, BW=389MiB/s (408MB/s)(3903MiB/10042msec); 0 zone resets 00:21:53.683 slat (usec): min=21, max=12613, avg=636.80, stdev=1208.89 00:21:53.683 clat (usec): min=4835, max=93956, avg=40515.46, stdev=8796.25 00:21:53.683 lat (usec): min=4877, max=95472, avg=41152.26, stdev=8905.75 00:21:53.683 clat percentiles (usec): 00:21:53.683 | 1.00th=[18220], 5.00th=[33817], 10.00th=[34866], 20.00th=[35914], 00:21:53.683 | 30.00th=[36439], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:21:53.683 | 70.00th=[38536], 80.00th=[51119], 90.00th=[53740], 95.00th=[55313], 00:21:53.683 | 99.00th=[66323], 99.50th=[69731], 99.90th=[80217], 99.95th=[87557], 00:21:53.683 | 99.99th=[93848] 00:21:53.683 bw ( KiB/s): min=282570, max=536625, per=11.34%, avg=398054.15, stdev=67368.93, samples=20 00:21:53.683 iops : min= 1103, max= 2096, avg=1554.85, stdev=263.21, samples=20 00:21:53.683 lat (msec) : 10=0.12%, 20=1.67%, 50=75.08%, 100=23.13% 00:21:53.683 cpu : usr=3.49%, sys=5.20%, ctx=3715, majf=0, minf=1 00:21:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.683 issued rwts: total=0,15611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.683 job7: (groupid=0, jobs=1): err= 0: pid=3803305: Tue Dec 3 11:49:23 2024 00:21:53.683 write: IOPS=1027, BW=257MiB/s (269MB/s)(2586MiB/10063msec); 0 zone resets 00:21:53.683 slat (usec): min=23, max=20681, avg=956.58, stdev=2051.27 00:21:53.683 clat (msec): min=15, max=145, avg=61.29, stdev=21.49 00:21:53.683 lat (msec): min=15, max=145, avg=62.25, stdev=21.85 00:21:53.683 clat percentiles (msec): 00:21:53.683 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 38], 00:21:53.683 | 30.00th=[ 40], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 59], 00:21:53.683 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 90], 95.00th=[ 92], 00:21:53.683 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 133], 99.95th=[ 138], 00:21:53.683 | 99.99th=[ 146] 00:21:53.683 bw ( KiB/s): min=180736, max=441344, per=7.49%, avg=263142.40, stdev=94435.68, samples=20 00:21:53.683 iops : min= 706, max= 1724, avg=1027.90, stdev=368.89, samples=20 00:21:53.683 lat (msec) : 20=0.08%, 50=32.08%, 100=67.13%, 250=0.71% 00:21:53.683 cpu : usr=2.20%, sys=4.16%, ctx=2500, majf=0, minf=1 00:21:53.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.683 issued rwts: total=0,10342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.683 job8: (groupid=0, jobs=1): err= 0: pid=3803306: Tue Dec 3 11:49:23 2024 00:21:53.683 write: IOPS=922, BW=231MiB/s 
(242MB/s)(2321MiB/10064msec); 0 zone resets 00:21:53.684 slat (usec): min=25, max=27216, avg=1073.06, stdev=2335.78 00:21:53.684 clat (msec): min=11, max=149, avg=68.29, stdev=16.81 00:21:53.684 lat (msec): min=11, max=149, avg=69.36, stdev=17.12 00:21:53.684 clat percentiles (msec): 00:21:53.684 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:21:53.684 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 71], 00:21:53.684 | 70.00th=[ 86], 80.00th=[ 88], 90.00th=[ 90], 95.00th=[ 93], 00:21:53.684 | 99.00th=[ 101], 99.50th=[ 110], 99.90th=[ 133], 99.95th=[ 142], 00:21:53.684 | 99.99th=[ 150] 00:21:53.684 bw ( KiB/s): min=176640, max=299520, per=6.72%, avg=236041.45, stdev=55033.49, samples=20 00:21:53.684 iops : min= 690, max= 1170, avg=922.00, stdev=214.99, samples=20 00:21:53.684 lat (msec) : 20=0.13%, 50=0.56%, 100=98.21%, 250=1.10% 00:21:53.684 cpu : usr=2.17%, sys=3.76%, ctx=2257, majf=0, minf=1 00:21:53.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:53.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.684 issued rwts: total=0,9283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.684 job9: (groupid=0, jobs=1): err= 0: pid=3803307: Tue Dec 3 11:49:23 2024 00:21:53.684 write: IOPS=1221, BW=305MiB/s (320MB/s)(3064MiB/10028msec); 0 zone resets 00:21:53.684 slat (usec): min=22, max=14674, avg=811.01, stdev=1414.99 00:21:53.684 clat (usec): min=19377, max=84955, avg=51545.51, stdev=8474.54 00:21:53.684 lat (usec): min=19433, max=85024, avg=52356.53, stdev=8572.99 00:21:53.684 clat percentiles (usec): 00:21:53.684 | 1.00th=[32637], 5.00th=[34866], 10.00th=[36439], 20.00th=[38536], 00:21:53.684 | 30.00th=[52691], 40.00th=[54264], 50.00th=[55313], 60.00th=[55837], 00:21:53.684 | 70.00th=[56361], 80.00th=[56886], 90.00th=[57934], 95.00th=[58983], 00:21:53.684 | 99.00th=[62129], 99.50th=[66847], 99.90th=[74974], 99.95th=[77071], 00:21:53.684 | 99.99th=[84411] 00:21:53.684 bw ( KiB/s): min=260608, max=459264, per=8.89%, avg=312089.60, stdev=57495.49, samples=20 00:21:53.684 iops : min= 1018, max= 1794, avg=1219.10, stdev=224.59, samples=20 00:21:53.684 lat (msec) : 20=0.04%, 50=22.05%, 100=77.91% 00:21:53.684 cpu : usr=2.81%, sys=5.30%, ctx=3050, majf=0, minf=1 00:21:53.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:53.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.684 issued rwts: total=0,12254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.684 job10: (groupid=0, jobs=1): err= 0: pid=3803308: Tue Dec 3 11:49:23 2024 00:21:53.684 write: IOPS=1208, BW=302MiB/s (317MB/s)(3025MiB/10016msec); 0 zone resets 00:21:53.684 slat (usec): min=19, max=23568, avg=805.63, stdev=1493.43 00:21:53.684 clat (usec): min=462, max=92324, avg=52147.13, stdev=10560.29 00:21:53.684 lat (usec): min=516, max=92395, avg=52952.75, stdev=10710.27 00:21:53.684 clat percentiles (usec): 00:21:53.684 | 1.00th=[14353], 5.00th=[33162], 10.00th=[35390], 20.00th=[52167], 00:21:53.684 | 30.00th=[53740], 40.00th=[54789], 50.00th=[55313], 60.00th=[55837], 00:21:53.684 | 70.00th=[56886], 80.00th=[57410], 90.00th=[58459], 95.00th=[60031], 00:21:53.684 | 99.00th=[69731], 
99.50th=[71828], 99.90th=[74974], 99.95th=[77071], 00:21:53.684 | 99.99th=[87557] 00:21:53.684 bw ( KiB/s): min=266773, max=527872, per=8.78%, avg=308171.30, stdev=63107.17, samples=20 00:21:53.684 iops : min= 1042, max= 2062, avg=1203.75, stdev=246.53, samples=20 00:21:53.684 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.01% 00:21:53.684 lat (msec) : 2=0.12%, 4=0.41%, 10=0.31%, 20=2.65%, 50=12.20% 00:21:53.684 lat (msec) : 100=84.26% 00:21:53.684 cpu : usr=2.76%, sys=5.08%, ctx=3093, majf=0, minf=1 00:21:53.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:53.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:53.684 issued rwts: total=0,12101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:53.684 00:21:53.684 Run status group 0 (all jobs): 00:21:53.684 WRITE: bw=3429MiB/s (3595MB/s), 231MiB/s-565MiB/s (242MB/s-593MB/s), io=33.7GiB (36.2GB), run=10016-10064msec 00:21:53.684 00:21:53.684 Disk stats (read/write): 00:21:53.684 nvme0n1: ios=49/30275, merge=0/0, ticks=19/1221158, in_queue=1221177, util=96.78% 00:21:53.684 nvme10n1: ios=0/18336, merge=0/0, ticks=0/1213763, in_queue=1213763, util=96.95% 00:21:53.684 nvme1n1: ios=0/44977, merge=0/0, ticks=0/1225014, in_queue=1225014, util=97.26% 00:21:53.684 nvme2n1: ios=0/18312, merge=0/0, ticks=0/1213453, in_queue=1213453, util=97.45% 00:21:53.684 nvme3n1: ios=0/23995, merge=0/0, ticks=0/1218597, in_queue=1218597, util=97.52% 00:21:53.684 nvme4n1: ios=0/18684, merge=0/0, ticks=0/1214249, in_queue=1214249, util=97.90% 00:21:53.684 nvme5n1: ios=0/30790, merge=0/0, ticks=0/1220399, in_queue=1220399, util=98.07% 00:21:53.684 nvme6n1: ios=0/20374, merge=0/0, ticks=0/1214802, in_queue=1214802, util=98.19% 00:21:53.684 nvme7n1: ios=0/18267, merge=0/0, ticks=0/1213247, in_queue=1213247, util=98.64% 00:21:53.684 nvme8n1: ios=0/23968, merge=0/0, ticks=0/1218530, in_queue=1218530, util=98.83% 00:21:53.684 nvme9n1: ios=0/23337, merge=0/0, ticks=0/1218316, in_queue=1218316, util=98.99% 00:21:53.684 11:49:23 -- target/multiconnection.sh@36 -- # sync 00:21:53.684 11:49:23 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:53.684 11:49:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.684 11:49:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:53.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:53.942 11:49:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:53.942 11:49:24 -- common/autotest_common.sh@1208 -- # local i=0 00:21:53.942 11:49:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:53.942 11:49:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:21:53.942 11:49:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:53.942 11:49:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:21:53.942 11:49:24 -- common/autotest_common.sh@1220 -- # return 0 00:21:53.942 11:49:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.942 11:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.942 11:49:24 -- common/autotest_common.sh@10 -- # set +x 00:21:53.942 11:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.942 11:49:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.942 
11:49:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:54.876 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:54.876 11:49:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:54.876 11:49:25 -- common/autotest_common.sh@1208 -- # local i=0 00:21:54.876 11:49:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:54.876 11:49:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:21:54.876 11:49:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:54.876 11:49:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:21:54.876 11:49:25 -- common/autotest_common.sh@1220 -- # return 0 00:21:54.876 11:49:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:54.876 11:49:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.876 11:49:25 -- common/autotest_common.sh@10 -- # set +x 00:21:55.133 11:49:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.133 11:49:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.133 11:49:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:56.068 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:56.068 11:49:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:56.068 11:49:26 -- common/autotest_common.sh@1208 -- # local i=0 00:21:56.068 11:49:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:56.068 11:49:26 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:21:56.068 11:49:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:21:56.068 11:49:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:56.068 11:49:26 -- common/autotest_common.sh@1220 -- # return 0 00:21:56.068 11:49:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:56.068 11:49:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.068 11:49:26 -- common/autotest_common.sh@10 -- # set +x 00:21:56.068 11:49:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.068 11:49:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.068 11:49:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:57.004 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:57.004 11:49:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:57.004 11:49:27 -- common/autotest_common.sh@1208 -- # local i=0 00:21:57.004 11:49:27 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:57.004 11:49:27 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:21:57.004 11:49:27 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:21:57.004 11:49:27 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:57.004 11:49:27 -- common/autotest_common.sh@1220 -- # return 0 00:21:57.004 11:49:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:57.004 11:49:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.004 11:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:57.004 11:49:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.004 11:49:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.004 11:49:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 
00:21:57.940 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:57.940 11:49:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:57.940 11:49:28 -- common/autotest_common.sh@1208 -- # local i=0 00:21:57.940 11:49:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:57.940 11:49:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:21:58.197 11:49:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:58.197 11:49:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:21:58.197 11:49:28 -- common/autotest_common.sh@1220 -- # return 0 00:21:58.197 11:49:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:58.197 11:49:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.197 11:49:28 -- common/autotest_common.sh@10 -- # set +x 00:21:58.197 11:49:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.197 11:49:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.197 11:49:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:59.129 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:59.129 11:49:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:59.129 11:49:29 -- common/autotest_common.sh@1208 -- # local i=0 00:21:59.129 11:49:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:59.129 11:49:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:21:59.129 11:49:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:59.129 11:49:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:21:59.129 11:49:29 -- common/autotest_common.sh@1220 -- # return 0 00:21:59.129 11:49:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:59.129 11:49:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.129 11:49:29 -- common/autotest_common.sh@10 -- # set +x 00:21:59.129 11:49:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.129 11:49:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.129 11:49:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:00.107 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:00.107 11:49:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:00.107 11:49:30 -- common/autotest_common.sh@1208 -- # local i=0 00:22:00.107 11:49:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:00.107 11:49:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:22:00.107 11:49:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:00.107 11:49:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:00.107 11:49:30 -- common/autotest_common.sh@1220 -- # return 0 00:22:00.107 11:49:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:00.107 11:49:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.107 11:49:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.107 11:49:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.107 11:49:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.107 11:49:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:01.038 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:01.038 11:49:31 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:01.038 11:49:31 -- common/autotest_common.sh@1208 -- # local i=0 00:22:01.038 11:49:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:01.038 11:49:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:22:01.038 11:49:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:01.038 11:49:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:01.038 11:49:31 -- common/autotest_common.sh@1220 -- # return 0 00:22:01.038 11:49:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:01.038 11:49:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.038 11:49:31 -- common/autotest_common.sh@10 -- # set +x 00:22:01.296 11:49:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.296 11:49:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.296 11:49:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:02.230 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:02.230 11:49:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:02.230 11:49:32 -- common/autotest_common.sh@1208 -- # local i=0 00:22:02.230 11:49:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:02.230 11:49:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:22:02.230 11:49:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:02.230 11:49:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:02.230 11:49:32 -- common/autotest_common.sh@1220 -- # return 0 00:22:02.230 11:49:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:02.230 11:49:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.230 11:49:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.230 11:49:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.230 11:49:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.230 11:49:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:03.164 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:03.164 11:49:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:03.164 11:49:33 -- common/autotest_common.sh@1208 -- # local i=0 00:22:03.164 11:49:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:03.164 11:49:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:22:03.164 11:49:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:03.164 11:49:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:03.164 11:49:33 -- common/autotest_common.sh@1220 -- # return 0 00:22:03.164 11:49:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:03.164 11:49:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.164 11:49:33 -- common/autotest_common.sh@10 -- # set +x 00:22:03.164 11:49:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.164 11:49:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.164 11:49:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:04.098 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:04.098 11:49:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:04.098 11:49:34 -- 
common/autotest_common.sh@1208 -- # local i=0 00:22:04.098 11:49:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:04.098 11:49:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:22:04.098 11:49:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:04.098 11:49:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:04.098 11:49:34 -- common/autotest_common.sh@1220 -- # return 0 00:22:04.098 11:49:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:04.098 11:49:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.098 11:49:34 -- common/autotest_common.sh@10 -- # set +x 00:22:04.098 11:49:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.098 11:49:34 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:04.098 11:49:34 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:04.098 11:49:34 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:04.098 11:49:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:04.098 11:49:34 -- nvmf/common.sh@116 -- # sync 00:22:04.098 11:49:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:04.098 11:49:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:04.098 11:49:34 -- nvmf/common.sh@119 -- # set +e 00:22:04.098 11:49:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:04.098 11:49:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:04.098 rmmod nvme_rdma 00:22:04.098 rmmod nvme_fabrics 00:22:04.098 11:49:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:04.098 11:49:34 -- nvmf/common.sh@123 -- # set -e 00:22:04.098 11:49:34 -- nvmf/common.sh@124 -- # return 0 00:22:04.098 11:49:34 -- nvmf/common.sh@477 -- # '[' -n 3795153 ']' 00:22:04.098 11:49:34 -- nvmf/common.sh@478 -- # killprocess 3795153 00:22:04.098 11:49:34 -- common/autotest_common.sh@936 -- # '[' -z 3795153 ']' 00:22:04.098 11:49:34 -- common/autotest_common.sh@940 -- # kill -0 3795153 00:22:04.098 11:49:34 -- common/autotest_common.sh@941 -- # uname 00:22:04.098 11:49:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:04.098 11:49:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3795153 00:22:04.355 11:49:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:04.355 11:49:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:04.355 11:49:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3795153' 00:22:04.355 killing process with pid 3795153 00:22:04.355 11:49:34 -- common/autotest_common.sh@955 -- # kill 3795153 00:22:04.355 11:49:34 -- common/autotest_common.sh@960 -- # wait 3795153 00:22:04.924 11:49:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:04.924 11:49:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:04.924 00:22:04.924 real 1m15.847s 00:22:04.924 user 4m56.236s 00:22:04.924 sys 0m19.568s 00:22:04.924 11:49:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:04.924 11:49:35 -- common/autotest_common.sh@10 -- # set +x 00:22:04.924 ************************************ 00:22:04.924 END TEST nvmf_multiconnection 00:22:04.924 ************************************ 00:22:04.924 11:49:35 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:04.924 11:49:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:04.924 11:49:35 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:22:04.924 11:49:35 -- common/autotest_common.sh@10 -- # set +x 00:22:04.924 ************************************ 00:22:04.924 START TEST nvmf_initiator_timeout 00:22:04.924 ************************************ 00:22:04.924 11:49:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:04.924 * Looking for test storage... 00:22:04.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:04.924 11:49:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:04.924 11:49:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:04.924 11:49:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:04.924 11:49:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:04.924 11:49:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:04.924 11:49:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:04.924 11:49:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:04.924 11:49:35 -- scripts/common.sh@335 -- # IFS=.-: 00:22:04.924 11:49:35 -- scripts/common.sh@335 -- # read -ra ver1 00:22:04.924 11:49:35 -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.924 11:49:35 -- scripts/common.sh@336 -- # read -ra ver2 00:22:04.924 11:49:35 -- scripts/common.sh@337 -- # local 'op=<' 00:22:04.924 11:49:35 -- scripts/common.sh@339 -- # ver1_l=2 00:22:04.924 11:49:35 -- scripts/common.sh@340 -- # ver2_l=1 00:22:04.924 11:49:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:04.924 11:49:35 -- scripts/common.sh@343 -- # case "$op" in 00:22:04.924 11:49:35 -- scripts/common.sh@344 -- # : 1 00:22:04.924 11:49:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:04.924 11:49:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.924 11:49:35 -- scripts/common.sh@364 -- # decimal 1 00:22:04.924 11:49:35 -- scripts/common.sh@352 -- # local d=1 00:22:04.924 11:49:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.924 11:49:35 -- scripts/common.sh@354 -- # echo 1 00:22:04.924 11:49:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:04.924 11:49:35 -- scripts/common.sh@365 -- # decimal 2 00:22:04.924 11:49:35 -- scripts/common.sh@352 -- # local d=2 00:22:04.924 11:49:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.924 11:49:35 -- scripts/common.sh@354 -- # echo 2 00:22:04.924 11:49:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:04.924 11:49:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:04.924 11:49:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:04.924 11:49:35 -- scripts/common.sh@367 -- # return 0 00:22:04.924 11:49:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.924 11:49:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:04.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.924 --rc genhtml_branch_coverage=1 00:22:04.924 --rc genhtml_function_coverage=1 00:22:04.924 --rc genhtml_legend=1 00:22:04.924 --rc geninfo_all_blocks=1 00:22:04.924 --rc geninfo_unexecuted_blocks=1 00:22:04.924 00:22:04.924 ' 00:22:04.924 11:49:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:04.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.924 --rc genhtml_branch_coverage=1 00:22:04.924 --rc genhtml_function_coverage=1 00:22:04.924 --rc genhtml_legend=1 00:22:04.924 --rc geninfo_all_blocks=1 00:22:04.924 --rc geninfo_unexecuted_blocks=1 00:22:04.924 00:22:04.924 ' 00:22:04.924 11:49:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:04.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.924 --rc genhtml_branch_coverage=1 00:22:04.924 --rc genhtml_function_coverage=1 00:22:04.924 --rc genhtml_legend=1 00:22:04.924 --rc geninfo_all_blocks=1 00:22:04.924 --rc geninfo_unexecuted_blocks=1 00:22:04.924 00:22:04.924 ' 00:22:04.924 11:49:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:04.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.924 --rc genhtml_branch_coverage=1 00:22:04.924 --rc genhtml_function_coverage=1 00:22:04.924 --rc genhtml_legend=1 00:22:04.924 --rc geninfo_all_blocks=1 00:22:04.924 --rc geninfo_unexecuted_blocks=1 00:22:04.924 00:22:04.924 ' 00:22:04.924 11:49:35 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.924 11:49:35 -- nvmf/common.sh@7 -- # uname -s 00:22:04.924 11:49:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.924 11:49:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.924 11:49:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.924 11:49:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.924 11:49:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.924 11:49:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.924 11:49:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.924 11:49:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.924 11:49:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.924 11:49:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.924 11:49:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:04.924 11:49:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:04.924 11:49:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.924 11:49:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.924 11:49:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.924 11:49:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:04.924 11:49:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.924 11:49:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.924 11:49:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.924 11:49:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.925 11:49:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.925 11:49:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.925 11:49:35 -- paths/export.sh@5 -- # export PATH 00:22:04.925 11:49:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.925 11:49:35 -- nvmf/common.sh@46 -- # : 0 00:22:04.925 11:49:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:04.925 11:49:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:04.925 11:49:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:04.925 11:49:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.925 11:49:35 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.925 11:49:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:04.925 11:49:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:04.925 11:49:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:04.925 11:49:35 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:04.925 11:49:35 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:04.925 11:49:35 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:04.925 11:49:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:04.925 11:49:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.925 11:49:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:04.925 11:49:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:04.925 11:49:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:04.925 11:49:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.925 11:49:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.925 11:49:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.925 11:49:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:04.925 11:49:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:04.925 11:49:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:04.925 11:49:35 -- common/autotest_common.sh@10 -- # set +x 00:22:13.047 11:49:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:13.047 11:49:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:13.047 11:49:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:13.047 11:49:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:13.047 11:49:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:13.047 11:49:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:13.047 11:49:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:13.047 11:49:42 -- nvmf/common.sh@294 -- # net_devs=() 00:22:13.047 11:49:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:13.047 11:49:42 -- nvmf/common.sh@295 -- # e810=() 00:22:13.047 11:49:42 -- nvmf/common.sh@295 -- # local -ga e810 00:22:13.047 11:49:42 -- nvmf/common.sh@296 -- # x722=() 00:22:13.047 11:49:42 -- nvmf/common.sh@296 -- # local -ga x722 00:22:13.047 11:49:42 -- nvmf/common.sh@297 -- # mlx=() 00:22:13.047 11:49:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:13.047 11:49:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.047 11:49:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.047 11:49:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.048 11:49:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:13.048 11:49:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:22:13.048 11:49:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:13.048 11:49:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:13.048 11:49:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:13.048 11:49:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:13.048 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:13.048 11:49:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:13.048 11:49:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:13.048 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:13.048 11:49:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:13.048 11:49:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:13.048 11:49:42 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.048 11:49:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:13.048 11:49:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.048 11:49:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:13.048 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.048 11:49:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.048 11:49:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:13.048 11:49:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.048 11:49:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:13.048 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:13.048 11:49:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.048 11:49:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:13.048 11:49:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:13.048 11:49:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:13.048 11:49:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:13.048 11:49:42 -- nvmf/common.sh@57 -- # uname 00:22:13.048 11:49:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:13.048 11:49:42 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:22:13.048 11:49:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:13.048 11:49:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:13.048 11:49:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:13.048 11:49:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:13.048 11:49:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:13.048 11:49:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:13.048 11:49:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:13.048 11:49:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:13.048 11:49:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:13.048 11:49:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:13.048 11:49:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:13.048 11:49:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:13.048 11:49:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:13.048 11:49:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:13.048 11:49:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@104 -- # continue 2 00:22:13.048 11:49:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:13.048 11:49:42 -- nvmf/common.sh@104 -- # continue 2 00:22:13.048 11:49:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:13.048 11:49:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:13.048 11:49:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:13.048 11:49:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:13.048 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:13.048 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:13.048 altname enp217s0f0np0 00:22:13.048 altname ens818f0np0 00:22:13.048 inet 192.168.100.8/24 scope global mlx_0_0 00:22:13.048 valid_lft forever preferred_lft forever 00:22:13.048 11:49:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:13.048 11:49:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:13.048 11:49:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:13.048 11:49:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:13.048 11:49:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:13.048 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:13.048 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:13.048 altname enp217s0f1np1 00:22:13.048 altname ens818f1np1 00:22:13.048 inet 192.168.100.9/24 scope global mlx_0_1 00:22:13.048 valid_lft forever preferred_lft forever 00:22:13.048 11:49:42 -- nvmf/common.sh@410 -- # return 0 00:22:13.048 11:49:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:13.048 11:49:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:13.048 11:49:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:13.048 11:49:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:13.048 11:49:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:13.048 11:49:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:13.048 11:49:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:13.048 11:49:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:13.048 11:49:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:13.048 11:49:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@104 -- # continue 2 00:22:13.048 11:49:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:13.048 11:49:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:13.048 11:49:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:13.048 11:49:42 -- nvmf/common.sh@104 -- # continue 2 00:22:13.048 11:49:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:13.048 11:49:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:13.048 11:49:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:13.048 11:49:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:13.048 11:49:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:13.048 11:49:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:13.049 11:49:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:13.049 11:49:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:13.049 11:49:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:13.049 11:49:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:13.049 192.168.100.9' 00:22:13.049 11:49:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:13.049 192.168.100.9' 00:22:13.049 11:49:42 -- nvmf/common.sh@445 -- # head -n 1 00:22:13.049 11:49:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:13.049 11:49:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:13.049 192.168.100.9' 00:22:13.049 11:49:42 -- nvmf/common.sh@446 -- # tail -n +2 00:22:13.049 11:49:42 -- nvmf/common.sh@446 -- # head -n 1 00:22:13.049 11:49:42 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:13.049 11:49:42 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:13.049 11:49:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:13.049 11:49:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:13.049 11:49:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:13.049 11:49:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:13.049 11:49:42 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:13.049 11:49:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:13.049 11:49:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:13.049 11:49:42 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 11:49:42 -- nvmf/common.sh@469 -- # nvmfpid=3810194 00:22:13.049 11:49:42 -- nvmf/common.sh@470 -- # waitforlisten 3810194 00:22:13.049 11:49:42 -- common/autotest_common.sh@829 -- # '[' -z 3810194 ']' 00:22:13.049 11:49:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.049 11:49:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.049 11:49:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.049 11:49:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.049 11:49:42 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 11:49:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:13.049 [2024-12-03 11:49:42.393230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:13.049 [2024-12-03 11:49:42.393286] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.049 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.049 [2024-12-03 11:49:42.462846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.049 [2024-12-03 11:49:42.536868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:13.049 [2024-12-03 11:49:42.536973] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.049 [2024-12-03 11:49:42.536983] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.049 [2024-12-03 11:49:42.536991] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
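The nvmfappstart / waitforlisten pair above launches build/bin/nvmf_tgt with -i 0 (shared-memory id 0), -e 0xFFFF (all tracepoint groups) and -m 0xF (cores 0-3, matching the four reactors reported next), then blocks until the target's RPC socket answers before any rpc_cmd is issued. A rough standalone equivalent is sketched below; polling spdk_get_version is an illustrative choice, not what the harness's waitforlisten helper does internally.

  # start the NVMe-oF target in the background with the same flags as this run
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the RPC server on the default /var/tmp/spdk.sock responds
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done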
00:22:13.049 [2024-12-03 11:49:42.537037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.049 [2024-12-03 11:49:42.537138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.049 [2024-12-03 11:49:42.537158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.049 [2024-12-03 11:49:42.537160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.049 11:49:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.049 11:49:43 -- common/autotest_common.sh@862 -- # return 0 00:22:13.049 11:49:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:13.049 11:49:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.049 11:49:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 11:49:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.049 11:49:43 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:13.049 11:49:43 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:13.049 11:49:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.049 11:49:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 Malloc0 00:22:13.049 11:49:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.049 11:49:43 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:13.049 11:49:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.049 11:49:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 Delay0 00:22:13.049 11:49:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.049 11:49:43 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:13.049 11:49:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.049 11:49:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 [2024-12-03 11:49:43.329364] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x988e40/0x7f71c0) succeed. 00:22:13.049 [2024-12-03 11:49:43.339351] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x98a250/0x877200) succeed. 
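Between the transport creation above and the subsystem setup that follows, the target-side picture is: a 64 MB malloc bdev (512-byte blocks) wrapped in a delay bdev Delay0 whose average and p99 read/write latencies all start at 30 microseconds, exported over an RDMA transport, then published as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 192.168.100.8:4420. rpc_cmd in the harness issues the same RPCs that scripts/rpc.py would, so the whole sequence done by hand looks roughly like this (default RPC socket assumed, flags copied from the log):

  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MB backing bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # 30 us avg/p99 read and write latency
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport, options as logged
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then attaches with the nvme connect call visible right after the listener comes up, and the fio write/verify job in the next entries runs against the resulting /dev/nvme0n1.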
00:22:13.049 11:49:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.049 11:49:43 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:13.049 11:49:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.049 11:49:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 11:49:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.049 11:49:43 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:13.049 11:49:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.049 11:49:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 11:49:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.049 11:49:43 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:13.049 11:49:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.049 11:49:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.049 [2024-12-03 11:49:43.482382] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:13.049 11:49:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.049 11:49:43 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:13.986 11:49:44 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:13.986 11:49:44 -- common/autotest_common.sh@1187 -- # local i=0 00:22:13.986 11:49:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.986 11:49:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:13.986 11:49:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:15.890 11:49:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:15.890 11:49:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:15.890 11:49:46 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:15.890 11:49:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:15.890 11:49:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.890 11:49:46 -- common/autotest_common.sh@1197 -- # return 0 00:22:15.890 11:49:46 -- target/initiator_timeout.sh@35 -- # fio_pid=3810945 00:22:15.890 11:49:46 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:15.890 11:49:46 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:16.175 [global] 00:22:16.175 thread=1 00:22:16.175 invalidate=1 00:22:16.175 rw=write 00:22:16.175 time_based=1 00:22:16.175 runtime=60 00:22:16.175 ioengine=libaio 00:22:16.175 direct=1 00:22:16.175 bs=4096 00:22:16.175 iodepth=1 00:22:16.175 norandommap=0 00:22:16.175 numjobs=1 00:22:16.175 00:22:16.175 verify_dump=1 00:22:16.175 verify_backlog=512 00:22:16.175 verify_state_save=0 00:22:16.175 do_verify=1 00:22:16.175 verify=crc32c-intel 00:22:16.175 [job0] 00:22:16.175 filename=/dev/nvme0n1 00:22:16.175 Could not set queue depth (nvme0n1) 00:22:16.435 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:16.435 fio-3.35 00:22:16.435 Starting 1 thread 00:22:18.960 11:49:49 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:18.960 11:49:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.960 11:49:49 -- common/autotest_common.sh@10 -- # set +x 00:22:18.960 true 00:22:18.960 11:49:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.960 11:49:49 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:18.960 11:49:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.960 11:49:49 -- common/autotest_common.sh@10 -- # set +x 00:22:18.960 true 00:22:18.960 11:49:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.960 11:49:49 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:18.960 11:49:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.960 11:49:49 -- common/autotest_common.sh@10 -- # set +x 00:22:18.960 true 00:22:18.960 11:49:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.960 11:49:49 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:18.960 11:49:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.960 11:49:49 -- common/autotest_common.sh@10 -- # set +x 00:22:18.960 true 00:22:18.960 11:49:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.960 11:49:49 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:22.238 11:49:52 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:22.238 11:49:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.238 11:49:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.238 true 00:22:22.238 11:49:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.238 11:49:52 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:22.238 11:49:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.238 11:49:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.238 true 00:22:22.238 11:49:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.238 11:49:52 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:22.238 11:49:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.238 11:49:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.238 true 00:22:22.238 11:49:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.238 11:49:52 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:22.238 11:49:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.238 11:49:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.238 true 00:22:22.238 11:49:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.238 11:49:52 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:22.238 11:49:52 -- target/initiator_timeout.sh@54 -- # wait 3810945 00:23:18.431 00:23:18.431 job0: (groupid=0, jobs=1): err= 0: pid=3811081: Tue Dec 3 11:50:46 2024 00:23:18.431 read: IOPS=1236, BW=4946KiB/s (5064kB/s)(290MiB/60000msec) 00:23:18.431 slat (usec): min=5, max=15880, avg= 9.71, stdev=81.92 00:23:18.431 clat (usec): min=48, max=42628k, avg=679.66, stdev=156509.52 00:23:18.431 lat (usec): min=88, max=42628k, avg=689.37, stdev=156509.54 00:23:18.431 clat percentiles (usec): 00:23:18.431 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 99], 00:23:18.431 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 
60.00th=[ 106], 00:23:18.431 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 115], 95.00th=[ 117], 00:23:18.431 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 135], 00:23:18.431 | 99.99th=[ 215] 00:23:18.431 write: IOPS=1237, BW=4949KiB/s (5068kB/s)(290MiB/60000msec); 0 zone resets 00:23:18.431 slat (usec): min=3, max=1042, avg=11.96, stdev= 4.47 00:23:18.431 clat (usec): min=2, max=297, avg=102.13, stdev= 6.97 00:23:18.431 lat (usec): min=85, max=1081, avg=114.09, stdev= 8.31 00:23:18.431 clat percentiles (usec): 00:23:18.431 | 1.00th=[ 88], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 97], 00:23:18.431 | 30.00th=[ 99], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 104], 00:23:18.431 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 115], 00:23:18.431 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 135], 00:23:18.431 | 99.99th=[ 180] 00:23:18.431 bw ( KiB/s): min= 4096, max=19481, per=100.00%, avg=16574.89, stdev=2647.33, samples=35 00:23:18.431 iops : min= 1024, max= 4870, avg=4143.71, stdev=661.82, samples=35 00:23:18.431 lat (usec) : 4=0.01%, 50=0.01%, 100=30.91%, 250=69.08%, 500=0.01% 00:23:18.431 lat (usec) : 750=0.01% 00:23:18.431 lat (msec) : >=2000=0.01% 00:23:18.431 cpu : usr=1.83%, sys=3.12%, ctx=148433, majf=0, minf=105 00:23:18.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.431 issued rwts: total=74183,74240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:18.431 00:23:18.431 Run status group 0 (all jobs): 00:23:18.431 READ: bw=4946KiB/s (5064kB/s), 4946KiB/s-4946KiB/s (5064kB/s-5064kB/s), io=290MiB (304MB), run=60000-60000msec 00:23:18.431 WRITE: bw=4949KiB/s (5068kB/s), 4949KiB/s-4949KiB/s (5068kB/s-5068kB/s), io=290MiB (304MB), run=60000-60000msec 00:23:18.431 00:23:18.431 Disk stats (read/write): 00:23:18.431 nvme0n1: ios=74041/73829, merge=0/0, ticks=7159/6825, in_queue=13984, util=99.82% 00:23:18.431 11:50:46 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:18.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:18.431 11:50:47 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:18.431 11:50:47 -- common/autotest_common.sh@1208 -- # local i=0 00:23:18.431 11:50:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:18.431 11:50:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:18.431 11:50:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:18.431 11:50:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:18.431 11:50:47 -- common/autotest_common.sh@1220 -- # return 0 00:23:18.431 11:50:47 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:18.431 11:50:47 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:18.431 nvmf hotplug test: fio successful as expected 00:23:18.431 11:50:47 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.431 11:50:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.431 11:50:47 -- common/autotest_common.sh@10 -- # set +x 00:23:18.431 11:50:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.431 11:50:47 -- target/initiator_timeout.sh@69 -- 
# rm -f ./local-job0-0-verify.state 00:23:18.431 11:50:47 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:18.431 11:50:47 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:18.431 11:50:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:18.431 11:50:47 -- nvmf/common.sh@116 -- # sync 00:23:18.431 11:50:48 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:18.431 11:50:48 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:18.431 11:50:48 -- nvmf/common.sh@119 -- # set +e 00:23:18.431 11:50:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:18.431 11:50:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:18.431 rmmod nvme_rdma 00:23:18.431 rmmod nvme_fabrics 00:23:18.431 11:50:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:18.432 11:50:48 -- nvmf/common.sh@123 -- # set -e 00:23:18.432 11:50:48 -- nvmf/common.sh@124 -- # return 0 00:23:18.432 11:50:48 -- nvmf/common.sh@477 -- # '[' -n 3810194 ']' 00:23:18.432 11:50:48 -- nvmf/common.sh@478 -- # killprocess 3810194 00:23:18.432 11:50:48 -- common/autotest_common.sh@936 -- # '[' -z 3810194 ']' 00:23:18.432 11:50:48 -- common/autotest_common.sh@940 -- # kill -0 3810194 00:23:18.432 11:50:48 -- common/autotest_common.sh@941 -- # uname 00:23:18.432 11:50:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.432 11:50:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3810194 00:23:18.432 11:50:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:18.432 11:50:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:18.432 11:50:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3810194' 00:23:18.432 killing process with pid 3810194 00:23:18.432 11:50:48 -- common/autotest_common.sh@955 -- # kill 3810194 00:23:18.432 11:50:48 -- common/autotest_common.sh@960 -- # wait 3810194 00:23:18.432 11:50:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:18.432 11:50:48 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:18.432 00:23:18.432 real 1m13.103s 00:23:18.432 user 4m33.745s 00:23:18.432 sys 0m8.000s 00:23:18.432 11:50:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:18.432 11:50:48 -- common/autotest_common.sh@10 -- # set +x 00:23:18.432 ************************************ 00:23:18.432 END TEST nvmf_initiator_timeout 00:23:18.432 ************************************ 00:23:18.432 11:50:48 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:18.432 11:50:48 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:18.432 11:50:48 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:18.432 11:50:48 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:18.432 11:50:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:18.432 11:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:18.432 11:50:48 -- common/autotest_common.sh@10 -- # set +x 00:23:18.432 ************************************ 00:23:18.432 START TEST nvmf_shutdown 00:23:18.432 ************************************ 00:23:18.432 11:50:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:18.432 * Looking for test storage... 
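The teardown recorded above follows a fixed order: disconnect the initiator, delete the subsystem, unload the initiator-side kernel modules, and stop the target. A hedged sketch of the equivalent manual steps; nvmftestfini and killprocess are helpers in the test framework, so a plain kill of the saved target PID stands in for them here:

  # Sketch of the same teardown order by hand.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drop the initiator-side controller
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-rdma                             # unload initiator transport modules
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                      # stop the nvmf_tgt application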
00:23:18.432 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:18.432 11:50:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:18.432 11:50:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:18.432 11:50:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:18.432 11:50:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:18.432 11:50:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:18.432 11:50:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:18.432 11:50:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:18.432 11:50:48 -- scripts/common.sh@335 -- # IFS=.-: 00:23:18.432 11:50:48 -- scripts/common.sh@335 -- # read -ra ver1 00:23:18.432 11:50:48 -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.432 11:50:48 -- scripts/common.sh@336 -- # read -ra ver2 00:23:18.432 11:50:48 -- scripts/common.sh@337 -- # local 'op=<' 00:23:18.432 11:50:48 -- scripts/common.sh@339 -- # ver1_l=2 00:23:18.432 11:50:48 -- scripts/common.sh@340 -- # ver2_l=1 00:23:18.432 11:50:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:18.432 11:50:48 -- scripts/common.sh@343 -- # case "$op" in 00:23:18.432 11:50:48 -- scripts/common.sh@344 -- # : 1 00:23:18.432 11:50:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:18.432 11:50:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:18.432 11:50:48 -- scripts/common.sh@364 -- # decimal 1 00:23:18.432 11:50:48 -- scripts/common.sh@352 -- # local d=1 00:23:18.432 11:50:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.432 11:50:48 -- scripts/common.sh@354 -- # echo 1 00:23:18.432 11:50:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:18.432 11:50:48 -- scripts/common.sh@365 -- # decimal 2 00:23:18.432 11:50:48 -- scripts/common.sh@352 -- # local d=2 00:23:18.432 11:50:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.432 11:50:48 -- scripts/common.sh@354 -- # echo 2 00:23:18.432 11:50:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:18.432 11:50:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:18.432 11:50:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:18.432 11:50:48 -- scripts/common.sh@367 -- # return 0 00:23:18.432 11:50:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.432 11:50:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.432 --rc genhtml_branch_coverage=1 00:23:18.432 --rc genhtml_function_coverage=1 00:23:18.432 --rc genhtml_legend=1 00:23:18.432 --rc geninfo_all_blocks=1 00:23:18.432 --rc geninfo_unexecuted_blocks=1 00:23:18.432 00:23:18.432 ' 00:23:18.432 11:50:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.432 --rc genhtml_branch_coverage=1 00:23:18.432 --rc genhtml_function_coverage=1 00:23:18.432 --rc genhtml_legend=1 00:23:18.432 --rc geninfo_all_blocks=1 00:23:18.432 --rc geninfo_unexecuted_blocks=1 00:23:18.432 00:23:18.432 ' 00:23:18.432 11:50:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.432 --rc genhtml_branch_coverage=1 00:23:18.432 --rc genhtml_function_coverage=1 00:23:18.432 --rc genhtml_legend=1 00:23:18.432 --rc geninfo_all_blocks=1 00:23:18.432 --rc geninfo_unexecuted_blocks=1 00:23:18.432 00:23:18.432 ' 
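The lcov version probe above decides which coverage flags get exported. The framework's cmp_versions splits each version string on separators and compares the components numerically; a small self-contained sketch of that idea, assuming purely numeric components (which is all the 1.15-versus-2 check needs):

  # Sketch of the comparison: split on '.'/'-', compare field by field,
  # padding the shorter version with zeros.
  version_lt() {                       # version_lt 1.15 2  ->  true if $1 < $2
      local IFS=.- i a b
      local -a v1 v2
      v1=($1) v2=($2)
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          a=${v1[i]:-0} b=${v2[i]:-0}
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                         # equal versions are not "less than"
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov options"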
00:23:18.432 11:50:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.432 --rc genhtml_branch_coverage=1 00:23:18.432 --rc genhtml_function_coverage=1 00:23:18.432 --rc genhtml_legend=1 00:23:18.432 --rc geninfo_all_blocks=1 00:23:18.432 --rc geninfo_unexecuted_blocks=1 00:23:18.432 00:23:18.432 ' 00:23:18.432 11:50:48 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.432 11:50:48 -- nvmf/common.sh@7 -- # uname -s 00:23:18.432 11:50:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.432 11:50:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.432 11:50:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.432 11:50:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.432 11:50:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.432 11:50:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.432 11:50:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.432 11:50:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.432 11:50:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.432 11:50:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.432 11:50:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:18.432 11:50:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:18.432 11:50:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.432 11:50:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.432 11:50:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.432 11:50:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:18.432 11:50:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.432 11:50:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.432 11:50:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.432 11:50:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.432 11:50:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.432 11:50:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.432 11:50:48 -- paths/export.sh@5 -- # export PATH 00:23:18.432 11:50:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.432 11:50:48 -- nvmf/common.sh@46 -- # : 0 00:23:18.432 11:50:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:18.432 11:50:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:18.432 11:50:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:18.432 11:50:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.432 11:50:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.432 11:50:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:18.432 11:50:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:18.432 11:50:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:18.432 11:50:48 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:18.432 11:50:48 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:18.432 11:50:48 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:18.432 11:50:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:18.432 11:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:18.432 11:50:48 -- common/autotest_common.sh@10 -- # set +x 00:23:18.432 ************************************ 00:23:18.432 START TEST nvmf_shutdown_tc1 00:23:18.432 ************************************ 00:23:18.433 11:50:48 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:23:18.433 11:50:48 -- target/shutdown.sh@74 -- # starttarget 00:23:18.433 11:50:48 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:18.433 11:50:48 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:18.433 11:50:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.433 11:50:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:18.433 11:50:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:18.433 11:50:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:18.433 11:50:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.433 11:50:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.433 11:50:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.433 11:50:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:18.433 11:50:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:18.433 11:50:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:18.433 11:50:48 -- common/autotest_common.sh@10 -- # set +x 00:23:24.986 11:50:55 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:24.986 11:50:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:24.986 11:50:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:24.986 11:50:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:24.986 11:50:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:24.986 11:50:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:24.986 11:50:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:24.986 11:50:55 -- nvmf/common.sh@294 -- # net_devs=() 00:23:24.986 11:50:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:24.986 11:50:55 -- nvmf/common.sh@295 -- # e810=() 00:23:24.986 11:50:55 -- nvmf/common.sh@295 -- # local -ga e810 00:23:24.986 11:50:55 -- nvmf/common.sh@296 -- # x722=() 00:23:24.986 11:50:55 -- nvmf/common.sh@296 -- # local -ga x722 00:23:24.986 11:50:55 -- nvmf/common.sh@297 -- # mlx=() 00:23:24.986 11:50:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:24.986 11:50:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.986 11:50:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:24.986 11:50:55 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:24.986 11:50:55 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:24.986 11:50:55 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:24.986 11:50:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:24.986 11:50:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:24.986 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:24.986 11:50:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:24.986 11:50:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:24.986 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:24.986 11:50:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:24.986 11:50:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:24.986 11:50:55 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.986 11:50:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:24.986 11:50:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.986 11:50:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:24.986 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:24.986 11:50:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.986 11:50:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.986 11:50:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:24.986 11:50:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.986 11:50:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:24.986 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:24.986 11:50:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.986 11:50:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:24.986 11:50:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:24.986 11:50:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:24.986 11:50:55 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:24.986 11:50:55 -- nvmf/common.sh@57 -- # uname 00:23:24.986 11:50:55 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:24.986 11:50:55 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:24.986 11:50:55 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:24.986 11:50:55 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:24.986 11:50:55 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:24.986 11:50:55 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:24.986 11:50:55 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:24.986 11:50:55 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:24.986 11:50:55 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:24.986 11:50:55 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:24.986 11:50:55 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:24.986 11:50:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:24.986 11:50:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:24.986 11:50:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:24.986 11:50:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:24.986 11:50:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:24.986 11:50:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:24.986 11:50:55 -- nvmf/common.sh@104 -- # continue 2 
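The scan above matches the Mellanox vendor/device pair 0x15b3:0x1015, resolves each PCI function to its net interface through sysfs, and hands out addresses from 192.168.100.0/24 starting at .8. A compressed sketch of that flow; the vendor:device filter, module list, and address range are taken from the log, but the loop itself is illustrative rather than the framework's exact code:

  # Load the RDMA/IB stack, then give each ConnectX port a test address.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  octet=8
  for pci in $(lspci -D -d 15b3:1015 | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          ifname=${netdir##*/}                          # e.g. mlx_0_0
          ip addr add 192.168.100.${octet}/24 dev "$ifname"
          (( octet++ ))
      done
  done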
00:23:24.986 11:50:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:24.986 11:50:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:24.986 11:50:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:24.986 11:50:55 -- nvmf/common.sh@104 -- # continue 2 00:23:24.986 11:50:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:24.986 11:50:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:24.986 11:50:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:24.986 11:50:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:24.986 11:50:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:24.986 11:50:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:24.986 11:50:55 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:24.986 11:50:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:24.987 11:50:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:24.987 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:24.987 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:24.987 altname enp217s0f0np0 00:23:24.987 altname ens818f0np0 00:23:24.987 inet 192.168.100.8/24 scope global mlx_0_0 00:23:24.987 valid_lft forever preferred_lft forever 00:23:24.987 11:50:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:24.987 11:50:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:24.987 11:50:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:24.987 11:50:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:24.987 11:50:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:24.987 11:50:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:24.987 11:50:55 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:24.987 11:50:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:24.987 11:50:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:24.987 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:24.987 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:24.987 altname enp217s0f1np1 00:23:24.987 altname ens818f1np1 00:23:24.987 inet 192.168.100.9/24 scope global mlx_0_1 00:23:24.987 valid_lft forever preferred_lft forever 00:23:24.987 11:50:55 -- nvmf/common.sh@410 -- # return 0 00:23:24.987 11:50:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:24.987 11:50:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:24.987 11:50:55 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:24.987 11:50:55 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:24.987 11:50:55 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:24.987 11:50:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:24.987 11:50:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:24.987 11:50:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:24.987 11:50:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:25.268 11:50:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:25.268 11:50:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.268 11:50:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.268 11:50:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:25.268 11:50:55 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:25.268 11:50:55 -- nvmf/common.sh@104 -- # continue 2 00:23:25.268 11:50:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:25.268 11:50:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.268 11:50:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:25.268 11:50:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.268 11:50:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:25.268 11:50:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:25.268 11:50:55 -- nvmf/common.sh@104 -- # continue 2 00:23:25.268 11:50:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:25.268 11:50:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:25.268 11:50:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:25.268 11:50:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:25.269 11:50:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.269 11:50:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.269 11:50:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:25.269 11:50:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:25.269 11:50:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:25.269 11:50:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:25.269 11:50:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:25.269 11:50:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:25.269 11:50:55 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:25.269 192.168.100.9' 00:23:25.269 11:50:55 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:25.269 192.168.100.9' 00:23:25.269 11:50:55 -- nvmf/common.sh@445 -- # head -n 1 00:23:25.269 11:50:55 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:25.269 11:50:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:25.269 192.168.100.9' 00:23:25.269 11:50:55 -- nvmf/common.sh@446 -- # tail -n +2 00:23:25.269 11:50:55 -- nvmf/common.sh@446 -- # head -n 1 00:23:25.269 11:50:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:25.269 11:50:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:25.269 11:50:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:25.269 11:50:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:25.269 11:50:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:25.269 11:50:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:25.269 11:50:55 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:25.269 11:50:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:25.269 11:50:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.269 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.269 11:50:55 -- nvmf/common.sh@469 -- # nvmfpid=3824861 00:23:25.269 11:50:55 -- nvmf/common.sh@470 -- # waitforlisten 3824861 00:23:25.269 11:50:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.269 11:50:55 -- common/autotest_common.sh@829 -- # '[' -z 3824861 ']' 00:23:25.269 11:50:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.269 11:50:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.269 11:50:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:25.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.269 11:50:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.269 11:50:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.269 [2024-12-03 11:50:55.736600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:25.269 [2024-12-03 11:50:55.736647] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.269 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.269 [2024-12-03 11:50:55.805516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.543 [2024-12-03 11:50:55.878674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:25.543 [2024-12-03 11:50:55.878782] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.543 [2024-12-03 11:50:55.878791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.543 [2024-12-03 11:50:55.878800] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.543 [2024-12-03 11:50:55.878903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.543 [2024-12-03 11:50:55.878985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.543 [2024-12-03 11:50:55.879095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.543 [2024-12-03 11:50:55.879096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:26.125 11:50:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.125 11:50:56 -- common/autotest_common.sh@862 -- # return 0 00:23:26.125 11:50:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:26.125 11:50:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.125 11:50:56 -- common/autotest_common.sh@10 -- # set +x 00:23:26.125 11:50:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.125 11:50:56 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:26.125 11:50:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.125 11:50:56 -- common/autotest_common.sh@10 -- # set +x 00:23:26.125 [2024-12-03 11:50:56.633831] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfe6380/0xfea870) succeed. 00:23:26.125 [2024-12-03 11:50:56.643032] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfe7970/0x102bf10) succeed. 
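Before any of the configuration RPCs can run, the shutdown test starts nvmf_tgt with core mask 0x1E (reactors on cores 1-4, as the EAL output above confirms) and waits for its RPC socket. A minimal sketch of that start-up, assuming the default SPDK build tree layout and using rpc_get_methods as the readiness probe in place of the framework's waitforlisten helper:

  # Start the target in the background and poll its RPC socket.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &        # shm id 0, full tracepoint mask, cores 1-4
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done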
00:23:26.383 11:50:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.383 11:50:56 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:26.383 11:50:56 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:26.383 11:50:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.383 11:50:56 -- common/autotest_common.sh@10 -- # set +x 00:23:26.383 11:50:56 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.383 11:50:56 -- target/shutdown.sh@28 -- # cat 00:23:26.383 11:50:56 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:26.383 11:50:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.383 11:50:56 -- common/autotest_common.sh@10 -- # set +x 00:23:26.383 Malloc1 00:23:26.383 [2024-12-03 11:50:56.868846] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:26.383 Malloc2 00:23:26.383 Malloc3 00:23:26.383 Malloc4 00:23:26.640 Malloc5 00:23:26.640 Malloc6 00:23:26.640 Malloc7 00:23:26.640 Malloc8 00:23:26.640 Malloc9 00:23:26.640 Malloc10 00:23:26.898 11:50:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 11:50:57 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:26.899 11:50:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.899 11:50:57 -- common/autotest_common.sh@10 -- # set +x 00:23:26.899 11:50:57 -- target/shutdown.sh@78 -- # perfpid=3825182 00:23:26.899 11:50:57 -- target/shutdown.sh@79 -- # waitforlisten 3825182 /var/tmp/bdevperf.sock 00:23:26.899 11:50:57 -- common/autotest_common.sh@829 -- # '[' -z 3825182 ']' 00:23:26.899 11:50:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.899 11:50:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.899 11:50:57 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:26.899 11:50:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:26.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.899 11:50:57 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:26.899 11:50:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.899 11:50:57 -- common/autotest_common.sh@10 -- # set +x 00:23:26.899 11:50:57 -- nvmf/common.sh@520 -- # config=() 00:23:26.899 11:50:57 -- nvmf/common.sh@520 -- # local subsystem config 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 [2024-12-03 11:50:57.353505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:26.899 [2024-12-03 11:50:57.353557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 
00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.899 11:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:26.899 { 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme$subsystem", 00:23:26.899 "trtype": "$TEST_TRANSPORT", 00:23:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.899 "adrfam": "ipv4", 00:23:26.899 "trsvcid": "$NVMF_PORT", 00:23:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.899 "hdgst": ${hdgst:-false}, 00:23:26.899 "ddgst": ${ddgst:-false} 00:23:26.899 }, 00:23:26.899 "method": "bdev_nvme_attach_controller" 00:23:26.899 } 00:23:26.899 EOF 00:23:26.899 )") 00:23:26.899 11:50:57 -- nvmf/common.sh@542 -- # cat 00:23:26.899 11:50:57 -- nvmf/common.sh@544 -- # jq . 00:23:26.899 11:50:57 -- nvmf/common.sh@545 -- # IFS=, 00:23:26.899 11:50:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:26.899 "params": { 00:23:26.899 "name": "Nvme1", 00:23:26.899 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme2", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme3", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme4", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme5", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme6", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme7", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme8", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme9", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 },{ 00:23:26.900 "params": { 00:23:26.900 "name": "Nvme10", 00:23:26.900 "trtype": "rdma", 00:23:26.900 "traddr": "192.168.100.8", 00:23:26.900 "adrfam": "ipv4", 00:23:26.900 "trsvcid": "4420", 00:23:26.900 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:26.900 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:26.900 "hdgst": false, 00:23:26.900 "ddgst": false 00:23:26.900 }, 00:23:26.900 "method": "bdev_nvme_attach_controller" 00:23:26.900 }' 00:23:26.900 [2024-12-03 11:50:57.426807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.900 [2024-12-03 11:50:57.496250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.285 11:50:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.285 11:50:58 -- common/autotest_common.sh@862 -- # return 0 00:23:28.285 11:50:58 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:28.285 11:50:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.285 11:50:58 -- common/autotest_common.sh@10 -- # set +x 00:23:28.285 11:50:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.285 11:50:58 -- target/shutdown.sh@83 -- # kill -9 3825182 00:23:28.285 11:50:58 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:28.285 11:50:58 -- target/shutdown.sh@87 -- # sleep 1 00:23:29.654 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3825182 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:29.654 
11:50:59 -- target/shutdown.sh@88 -- # kill -0 3824861 00:23:29.654 11:50:59 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:29.654 11:50:59 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:29.654 11:50:59 -- nvmf/common.sh@520 -- # config=() 00:23:29.654 11:50:59 -- nvmf/common.sh@520 -- # local subsystem config 00:23:29.654 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.654 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.654 { 00:23:29.654 "params": { 00:23:29.654 "name": "Nvme$subsystem", 00:23:29.654 "trtype": "$TEST_TRANSPORT", 00:23:29.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.654 "adrfam": "ipv4", 00:23:29.654 "trsvcid": "$NVMF_PORT", 00:23:29.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.654 "hdgst": ${hdgst:-false}, 00:23:29.654 "ddgst": ${ddgst:-false} 00:23:29.654 }, 00:23:29.654 "method": "bdev_nvme_attach_controller" 00:23:29.654 } 00:23:29.654 EOF 00:23:29.654 )") 00:23:29.654 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.654 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.654 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.654 { 00:23:29.654 "params": { 00:23:29.654 "name": "Nvme$subsystem", 00:23:29.654 "trtype": "$TEST_TRANSPORT", 00:23:29.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.654 "adrfam": "ipv4", 00:23:29.654 "trsvcid": "$NVMF_PORT", 00:23:29.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.654 "hdgst": ${hdgst:-false}, 00:23:29.654 "ddgst": ${ddgst:-false} 00:23:29.654 }, 00:23:29.654 "method": "bdev_nvme_attach_controller" 00:23:29.654 } 00:23:29.654 EOF 00:23:29.654 )") 00:23:29.654 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.655 { 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme$subsystem", 00:23:29.655 "trtype": "$TEST_TRANSPORT", 00:23:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "$NVMF_PORT", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.655 "hdgst": ${hdgst:-false}, 00:23:29.655 "ddgst": ${ddgst:-false} 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 } 00:23:29.655 EOF 00:23:29.655 )") 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.655 { 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme$subsystem", 00:23:29.655 "trtype": "$TEST_TRANSPORT", 00:23:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "$NVMF_PORT", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.655 "hdgst": ${hdgst:-false}, 00:23:29.655 "ddgst": ${ddgst:-false} 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 } 00:23:29.655 EOF 00:23:29.655 )") 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 11:50:59 -- nvmf/common.sh@522 -- 
# for subsystem in "${@:-1}" 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.655 { 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme$subsystem", 00:23:29.655 "trtype": "$TEST_TRANSPORT", 00:23:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "$NVMF_PORT", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.655 "hdgst": ${hdgst:-false}, 00:23:29.655 "ddgst": ${ddgst:-false} 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 } 00:23:29.655 EOF 00:23:29.655 )") 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.655 { 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme$subsystem", 00:23:29.655 "trtype": "$TEST_TRANSPORT", 00:23:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "$NVMF_PORT", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.655 "hdgst": ${hdgst:-false}, 00:23:29.655 "ddgst": ${ddgst:-false} 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 } 00:23:29.655 EOF 00:23:29.655 )") 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 [2024-12-03 11:50:59.920986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:29.655 [2024-12-03 11:50:59.921038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3825656 ] 00:23:29.655 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.655 { 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme$subsystem", 00:23:29.655 "trtype": "$TEST_TRANSPORT", 00:23:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "$NVMF_PORT", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.655 "hdgst": ${hdgst:-false}, 00:23:29.655 "ddgst": ${ddgst:-false} 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 } 00:23:29.655 EOF 00:23:29.655 )") 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.655 { 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme$subsystem", 00:23:29.655 "trtype": "$TEST_TRANSPORT", 00:23:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "$NVMF_PORT", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.655 "hdgst": ${hdgst:-false}, 00:23:29.655 "ddgst": ${ddgst:-false} 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 } 00:23:29.655 EOF 00:23:29.655 )") 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.655 { 
00:23:29.655 "params": { 00:23:29.655 "name": "Nvme$subsystem", 00:23:29.655 "trtype": "$TEST_TRANSPORT", 00:23:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "$NVMF_PORT", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.655 "hdgst": ${hdgst:-false}, 00:23:29.655 "ddgst": ${ddgst:-false} 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 } 00:23:29.655 EOF 00:23:29.655 )") 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 11:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:29.655 { 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme$subsystem", 00:23:29.655 "trtype": "$TEST_TRANSPORT", 00:23:29.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "$NVMF_PORT", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.655 "hdgst": ${hdgst:-false}, 00:23:29.655 "ddgst": ${ddgst:-false} 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 } 00:23:29.655 EOF 00:23:29.655 )") 00:23:29.655 11:50:59 -- nvmf/common.sh@542 -- # cat 00:23:29.655 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.655 11:50:59 -- nvmf/common.sh@544 -- # jq . 00:23:29.655 11:50:59 -- nvmf/common.sh@545 -- # IFS=, 00:23:29.655 11:50:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme1", 00:23:29.655 "trtype": "rdma", 00:23:29.655 "traddr": "192.168.100.8", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "4420", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.655 "hdgst": false, 00:23:29.655 "ddgst": false 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 },{ 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme2", 00:23:29.655 "trtype": "rdma", 00:23:29.655 "traddr": "192.168.100.8", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "4420", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.655 "hdgst": false, 00:23:29.655 "ddgst": false 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 },{ 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme3", 00:23:29.655 "trtype": "rdma", 00:23:29.655 "traddr": "192.168.100.8", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "4420", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:29.655 "hdgst": false, 00:23:29.655 "ddgst": false 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 },{ 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme4", 00:23:29.655 "trtype": "rdma", 00:23:29.655 "traddr": "192.168.100.8", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "4420", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:29.655 "hdgst": false, 00:23:29.655 "ddgst": false 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 },{ 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme5", 00:23:29.655 "trtype": "rdma", 00:23:29.655 "traddr": "192.168.100.8", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "4420", 00:23:29.655 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:29.655 "hdgst": false, 00:23:29.655 "ddgst": false 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 },{ 00:23:29.655 "params": { 00:23:29.655 "name": "Nvme6", 00:23:29.655 "trtype": "rdma", 00:23:29.655 "traddr": "192.168.100.8", 00:23:29.655 "adrfam": "ipv4", 00:23:29.655 "trsvcid": "4420", 00:23:29.655 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:29.655 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:29.655 "hdgst": false, 00:23:29.655 "ddgst": false 00:23:29.655 }, 00:23:29.655 "method": "bdev_nvme_attach_controller" 00:23:29.655 },{ 00:23:29.656 "params": { 00:23:29.656 "name": "Nvme7", 00:23:29.656 "trtype": "rdma", 00:23:29.656 "traddr": "192.168.100.8", 00:23:29.656 "adrfam": "ipv4", 00:23:29.656 "trsvcid": "4420", 00:23:29.656 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:29.656 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:29.656 "hdgst": false, 00:23:29.656 "ddgst": false 00:23:29.656 }, 00:23:29.656 "method": "bdev_nvme_attach_controller" 00:23:29.656 },{ 00:23:29.656 "params": { 00:23:29.656 "name": "Nvme8", 00:23:29.656 "trtype": "rdma", 00:23:29.656 "traddr": "192.168.100.8", 00:23:29.656 "adrfam": "ipv4", 00:23:29.656 "trsvcid": "4420", 00:23:29.656 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:29.656 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:29.656 "hdgst": false, 00:23:29.656 "ddgst": false 00:23:29.656 }, 00:23:29.656 "method": "bdev_nvme_attach_controller" 00:23:29.656 },{ 00:23:29.656 "params": { 00:23:29.656 "name": "Nvme9", 00:23:29.656 "trtype": "rdma", 00:23:29.656 "traddr": "192.168.100.8", 00:23:29.656 "adrfam": "ipv4", 00:23:29.656 "trsvcid": "4420", 00:23:29.656 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:29.656 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:29.656 "hdgst": false, 00:23:29.656 "ddgst": false 00:23:29.656 }, 00:23:29.656 "method": "bdev_nvme_attach_controller" 00:23:29.656 },{ 00:23:29.656 "params": { 00:23:29.656 "name": "Nvme10", 00:23:29.656 "trtype": "rdma", 00:23:29.656 "traddr": "192.168.100.8", 00:23:29.656 "adrfam": "ipv4", 00:23:29.656 "trsvcid": "4420", 00:23:29.656 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:29.656 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:29.656 "hdgst": false, 00:23:29.656 "ddgst": false 00:23:29.656 }, 00:23:29.656 "method": "bdev_nvme_attach_controller" 00:23:29.656 }' 00:23:29.656 [2024-12-03 11:50:59.993691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.656 [2024-12-03 11:51:00.073942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.585 Running I/O for 1 seconds... 
00:23:31.517 00:23:31.517 Latency(us) 00:23:31.517 [2024-12-03T10:51:02.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.517 [2024-12-03T10:51:02.131Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.517 Verification LBA range: start 0x0 length 0x400 00:23:31.517 Nvme1n1 : 1.08 722.73 45.17 0.00 0.00 87502.36 7444.89 104438.17 00:23:31.517 [2024-12-03T10:51:02.131Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.517 Verification LBA range: start 0x0 length 0x400 00:23:31.517 Nvme2n1 : 1.08 761.77 47.61 0.00 0.00 82426.18 7654.60 74239.18 00:23:31.517 [2024-12-03T10:51:02.131Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.517 Verification LBA range: start 0x0 length 0x400 00:23:31.517 Nvme3n1 : 1.08 761.07 47.57 0.00 0.00 82010.11 7811.89 73400.32 00:23:31.517 [2024-12-03T10:51:02.131Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.517 Verification LBA range: start 0x0 length 0x400 00:23:31.517 Nvme4n1 : 1.08 760.37 47.52 0.00 0.00 81579.94 8021.61 72142.03 00:23:31.517 [2024-12-03T10:51:02.131Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.517 Verification LBA range: start 0x0 length 0x400 00:23:31.517 Nvme5n1 : 1.09 675.96 42.25 0.00 0.00 91118.11 8126.46 138412.03 00:23:31.518 [2024-12-03T10:51:02.132Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.518 Verification LBA range: start 0x0 length 0x400 00:23:31.518 Nvme6n1 : 1.09 675.43 42.21 0.00 0.00 90639.25 8126.46 137573.17 00:23:31.518 [2024-12-03T10:51:02.132Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.518 Verification LBA range: start 0x0 length 0x400 00:23:31.518 Nvme7n1 : 1.09 758.48 47.40 0.00 0.00 80296.36 8336.18 70044.88 00:23:31.518 [2024-12-03T10:51:02.132Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.518 Verification LBA range: start 0x0 length 0x400 00:23:31.518 Nvme8n1 : 1.09 757.79 47.36 0.00 0.00 79866.95 8493.47 71722.60 00:23:31.518 [2024-12-03T10:51:02.132Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.518 Verification LBA range: start 0x0 length 0x400 00:23:31.518 Nvme9n1 : 1.09 673.67 42.10 0.00 0.00 89204.32 8598.32 136734.31 00:23:31.518 [2024-12-03T10:51:02.132Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.518 Verification LBA range: start 0x0 length 0x400 00:23:31.518 Nvme10n1 : 1.09 673.13 42.07 0.00 0.00 88643.17 7759.46 135056.59 00:23:31.518 [2024-12-03T10:51:02.132Z] =================================================================================================================== 00:23:31.518 [2024-12-03T10:51:02.132Z] Total : 7220.39 451.27 0.00 0.00 85104.54 7444.89 138412.03 00:23:31.776 11:51:02 -- target/shutdown.sh@93 -- # stoptarget 00:23:31.776 11:51:02 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:31.776 11:51:02 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:31.776 11:51:02 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.776 11:51:02 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:31.776 11:51:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:31.776 11:51:02 -- nvmf/common.sh@116 -- # sync 00:23:31.776 11:51:02 -- 
nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:31.776 11:51:02 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:31.776 11:51:02 -- nvmf/common.sh@119 -- # set +e 00:23:31.776 11:51:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:31.776 11:51:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:31.776 rmmod nvme_rdma 00:23:31.776 rmmod nvme_fabrics 00:23:32.034 11:51:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:32.034 11:51:02 -- nvmf/common.sh@123 -- # set -e 00:23:32.034 11:51:02 -- nvmf/common.sh@124 -- # return 0 00:23:32.034 11:51:02 -- nvmf/common.sh@477 -- # '[' -n 3824861 ']' 00:23:32.034 11:51:02 -- nvmf/common.sh@478 -- # killprocess 3824861 00:23:32.034 11:51:02 -- common/autotest_common.sh@936 -- # '[' -z 3824861 ']' 00:23:32.034 11:51:02 -- common/autotest_common.sh@940 -- # kill -0 3824861 00:23:32.034 11:51:02 -- common/autotest_common.sh@941 -- # uname 00:23:32.034 11:51:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:32.034 11:51:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3824861 00:23:32.034 11:51:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:32.034 11:51:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:32.034 11:51:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3824861' 00:23:32.034 killing process with pid 3824861 00:23:32.034 11:51:02 -- common/autotest_common.sh@955 -- # kill 3824861 00:23:32.034 11:51:02 -- common/autotest_common.sh@960 -- # wait 3824861 00:23:32.601 11:51:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:32.601 11:51:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:32.601 00:23:32.601 real 0m14.252s 00:23:32.601 user 0m33.537s 00:23:32.601 sys 0m6.474s 00:23:32.601 11:51:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:32.601 11:51:02 -- common/autotest_common.sh@10 -- # set +x 00:23:32.601 ************************************ 00:23:32.601 END TEST nvmf_shutdown_tc1 00:23:32.601 ************************************ 00:23:32.601 11:51:02 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:32.601 11:51:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:32.601 11:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:32.601 11:51:02 -- common/autotest_common.sh@10 -- # set +x 00:23:32.601 ************************************ 00:23:32.601 START TEST nvmf_shutdown_tc2 00:23:32.601 ************************************ 00:23:32.601 11:51:03 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:23:32.601 11:51:03 -- target/shutdown.sh@98 -- # starttarget 00:23:32.601 11:51:03 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:32.602 11:51:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:32.602 11:51:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.602 11:51:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:32.602 11:51:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:32.602 11:51:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:32.602 11:51:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.602 11:51:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.602 11:51:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.602 11:51:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:32.602 11:51:03 -- nvmf/common.sh@284 -- # xtrace_disable 
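Annotation: the teardown above (nvmftestfini followed by killprocess 3824861) is what actually stops the tc1 target: the nvme-rdma and nvme-fabrics modules are removed, then the nvmf_tgt pid is signalled and reaped. Roughly the sequence the killprocess helper runs, per the xtrace; this is a simplified sketch, not the verbatim autotest_common.sh function:

pid=3824861                                   # $nvmfpid recorded by nvmfappstart
kill -0 "$pid"                                # liveness check; fails if the target already exited
process_name=$(ps --no-headers -o comm= "$pid")
if [ "$process_name" != sudo ]; then          # the helper special-cases a sudo wrapper; not taken here (reactor_1)
    echo "killing process with pid $pid"
    kill "$pid"                               # default SIGTERM so SPDK can shut down cleanly
    wait "$pid"                               # reap the child and pick up its exit status
fi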
00:23:32.602 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.602 11:51:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:32.602 11:51:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:32.602 11:51:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:32.602 11:51:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:32.602 11:51:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:32.602 11:51:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:32.602 11:51:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:32.602 11:51:03 -- nvmf/common.sh@294 -- # net_devs=() 00:23:32.602 11:51:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:32.602 11:51:03 -- nvmf/common.sh@295 -- # e810=() 00:23:32.602 11:51:03 -- nvmf/common.sh@295 -- # local -ga e810 00:23:32.602 11:51:03 -- nvmf/common.sh@296 -- # x722=() 00:23:32.602 11:51:03 -- nvmf/common.sh@296 -- # local -ga x722 00:23:32.602 11:51:03 -- nvmf/common.sh@297 -- # mlx=() 00:23:32.602 11:51:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:32.602 11:51:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.602 11:51:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:32.602 11:51:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:32.602 11:51:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:32.602 11:51:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:32.602 11:51:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:32.602 11:51:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:32.602 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:32.602 11:51:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:32.602 11:51:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:32.602 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:32.602 11:51:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:32.602 11:51:03 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:32.602 11:51:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:32.602 11:51:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.602 11:51:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.602 11:51:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.602 11:51:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:32.602 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:32.602 11:51:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.602 11:51:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.602 11:51:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.602 11:51:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.602 11:51:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:32.602 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:32.602 11:51:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.602 11:51:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:32.602 11:51:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:32.602 11:51:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:32.602 11:51:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:32.602 11:51:03 -- nvmf/common.sh@57 -- # uname 00:23:32.602 11:51:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:32.602 11:51:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:32.602 11:51:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:32.602 11:51:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:32.602 11:51:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:32.602 11:51:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:32.602 11:51:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:32.602 11:51:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:32.602 11:51:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:32.602 11:51:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:32.602 11:51:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:32.602 11:51:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:32.602 11:51:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:32.602 11:51:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:32.602 11:51:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:32.602 11:51:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:32.602 11:51:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:32.602 
11:51:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:32.602 11:51:03 -- nvmf/common.sh@104 -- # continue 2 00:23:32.602 11:51:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.602 11:51:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:32.602 11:51:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:32.602 11:51:03 -- nvmf/common.sh@104 -- # continue 2 00:23:32.602 11:51:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:32.602 11:51:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:32.603 11:51:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:32.603 11:51:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:32.603 11:51:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:32.603 11:51:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:32.603 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:32.603 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:32.603 altname enp217s0f0np0 00:23:32.603 altname ens818f0np0 00:23:32.603 inet 192.168.100.8/24 scope global mlx_0_0 00:23:32.603 valid_lft forever preferred_lft forever 00:23:32.603 11:51:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:32.603 11:51:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:32.603 11:51:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:32.603 11:51:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:32.603 11:51:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:32.603 11:51:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:32.603 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:32.603 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:32.603 altname enp217s0f1np1 00:23:32.603 altname ens818f1np1 00:23:32.603 inet 192.168.100.9/24 scope global mlx_0_1 00:23:32.603 valid_lft forever preferred_lft forever 00:23:32.603 11:51:03 -- nvmf/common.sh@410 -- # return 0 00:23:32.603 11:51:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:32.603 11:51:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:32.603 11:51:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:32.603 11:51:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:32.603 11:51:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:32.603 11:51:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:32.603 11:51:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:32.603 11:51:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:32.603 11:51:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:32.603 11:51:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:32.603 11:51:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:32.603 11:51:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
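Annotation: allocate_nic_ips above walks the RDMA-capable netdevs (mlx_0_0, mlx_0_1) and records their IPv4 addresses, which become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP a few lines further down. The address extraction is just ip/awk/cut, exactly as the xtrace shows; a sketch of the helper being traced:

get_ip_address() {                     # same name as the nvmf/common.sh helper above
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0                 # -> 192.168.100.8 on this node
get_ip_address mlx_0_1                 # -> 192.168.100.9

Both ports sit on the same 192.168.100.0/24 subnet; only the .8 address is used as the target address for the listeners in these tests.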
00:23:32.603 11:51:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:32.603 11:51:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:32.603 11:51:03 -- nvmf/common.sh@104 -- # continue 2 00:23:32.603 11:51:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:32.603 11:51:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.603 11:51:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:32.603 11:51:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.603 11:51:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:32.603 11:51:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:32.603 11:51:03 -- nvmf/common.sh@104 -- # continue 2 00:23:32.603 11:51:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:32.603 11:51:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:32.603 11:51:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:32.603 11:51:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:32.861 11:51:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:32.861 11:51:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:32.861 11:51:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:32.861 11:51:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:32.861 11:51:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:32.861 11:51:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:32.861 11:51:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:32.861 192.168.100.9' 00:23:32.861 11:51:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:32.861 192.168.100.9' 00:23:32.861 11:51:03 -- nvmf/common.sh@445 -- # head -n 1 00:23:32.861 11:51:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:32.861 11:51:03 -- nvmf/common.sh@446 -- # head -n 1 00:23:32.861 11:51:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:32.861 192.168.100.9' 00:23:32.861 11:51:03 -- nvmf/common.sh@446 -- # tail -n +2 00:23:32.861 11:51:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:32.861 11:51:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:32.861 11:51:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:32.861 11:51:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:32.861 11:51:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:32.861 11:51:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:32.861 11:51:03 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:32.861 11:51:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:32.861 11:51:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.861 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.861 11:51:03 -- nvmf/common.sh@469 -- # nvmfpid=3826509 00:23:32.861 11:51:03 -- nvmf/common.sh@470 -- # waitforlisten 3826509 00:23:32.861 11:51:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:32.861 11:51:03 -- common/autotest_common.sh@829 -- # '[' -z 3826509 ']' 00:23:32.861 11:51:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.861 11:51:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.861 11:51:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.861 11:51:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.861 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.861 [2024-12-03 11:51:03.333271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:32.861 [2024-12-03 11:51:03.333323] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.861 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.861 [2024-12-03 11:51:03.404376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.119 [2024-12-03 11:51:03.479702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:33.119 [2024-12-03 11:51:03.479814] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.119 [2024-12-03 11:51:03.479824] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.119 [2024-12-03 11:51:03.479836] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.119 [2024-12-03 11:51:03.479937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.119 [2024-12-03 11:51:03.480019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.119 [2024-12-03 11:51:03.480145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.119 [2024-12-03 11:51:03.480146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:33.684 11:51:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.684 11:51:04 -- common/autotest_common.sh@862 -- # return 0 00:23:33.684 11:51:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:33.684 11:51:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:33.684 11:51:04 -- common/autotest_common.sh@10 -- # set +x 00:23:33.684 11:51:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.684 11:51:04 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:33.684 11:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.684 11:51:04 -- common/autotest_common.sh@10 -- # set +x 00:23:33.684 [2024-12-03 11:51:04.235565] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1443380/0x1447870) succeed. 00:23:33.684 [2024-12-03 11:51:04.244809] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1444970/0x1488f10) succeed. 
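Annotation: at this point the tc2 target is up. nvmfappstart launched nvmf_tgt with -m 0x1E (cores 1-4, keeping core 0 free for bdevperf's -c 0x1) and -e 0xFFFF for full tracepoints, the RDMA transport was created with rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, and both mlx5 IB devices were registered. The create_subsystems stage that follows batches one RPC block per subsystem into rpcs.txt and replays it with rpc_cmd; the visible effects (Malloc1..Malloc10 plus the RDMA listener notice on 192.168.100.8 port 4420) suggest each block looks roughly like the sketch below. The sizes and flag values are illustrative assumptions, since shutdown.sh's cat body is not echoed in this trace; the RPC method names themselves are standard SPDK RPCs:

# for each i in 1..10 (hedged reconstruction, argument values are illustrative only):
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420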
00:23:33.942 11:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.942 11:51:04 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:33.942 11:51:04 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:33.942 11:51:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:33.942 11:51:04 -- common/autotest_common.sh@10 -- # set +x 00:23:33.942 11:51:04 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.942 11:51:04 -- target/shutdown.sh@28 -- # cat 00:23:33.942 11:51:04 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:33.942 11:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.942 11:51:04 -- common/autotest_common.sh@10 -- # set +x 00:23:33.942 Malloc1 00:23:33.942 [2024-12-03 11:51:04.466912] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:33.942 Malloc2 00:23:33.942 Malloc3 00:23:34.199 Malloc4 00:23:34.199 Malloc5 00:23:34.199 Malloc6 00:23:34.199 Malloc7 00:23:34.199 Malloc8 00:23:34.199 Malloc9 00:23:34.457 Malloc10 00:23:34.457 11:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.457 11:51:04 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:34.457 11:51:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:34.457 11:51:04 -- common/autotest_common.sh@10 -- # set +x 00:23:34.457 11:51:04 -- target/shutdown.sh@102 -- # perfpid=3827094 00:23:34.457 11:51:04 -- target/shutdown.sh@103 -- # waitforlisten 3827094 /var/tmp/bdevperf.sock 00:23:34.457 11:51:04 -- common/autotest_common.sh@829 -- # '[' -z 3827094 ']' 00:23:34.457 11:51:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.457 11:51:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.457 11:51:04 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:34.457 11:51:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:34.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.457 11:51:04 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:34.457 11:51:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.457 11:51:04 -- nvmf/common.sh@520 -- # config=() 00:23:34.457 11:51:04 -- common/autotest_common.sh@10 -- # set +x 00:23:34.457 11:51:04 -- nvmf/common.sh@520 -- # local subsystem config 00:23:34.457 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.457 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.457 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 [2024-12-03 11:51:04.955615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:34.458 [2024-12-03 11:51:04.955668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827094 ] 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 
"params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:34.458 { 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme$subsystem", 00:23:34.458 "trtype": "$TEST_TRANSPORT", 00:23:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "$NVMF_PORT", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.458 "hdgst": ${hdgst:-false}, 00:23:34.458 "ddgst": ${ddgst:-false} 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 } 00:23:34.458 EOF 00:23:34.458 )") 00:23:34.458 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.458 11:51:04 -- nvmf/common.sh@542 -- # cat 00:23:34.458 11:51:04 -- nvmf/common.sh@544 -- # jq . 00:23:34.458 11:51:04 -- nvmf/common.sh@545 -- # IFS=, 00:23:34.458 11:51:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme1", 00:23:34.458 "trtype": "rdma", 00:23:34.458 "traddr": "192.168.100.8", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "4420", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.458 "hdgst": false, 00:23:34.458 "ddgst": false 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 },{ 00:23:34.458 "params": { 00:23:34.458 "name": "Nvme2", 00:23:34.458 "trtype": "rdma", 00:23:34.458 "traddr": "192.168.100.8", 00:23:34.458 "adrfam": "ipv4", 00:23:34.458 "trsvcid": "4420", 00:23:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.458 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:34.458 "hdgst": false, 00:23:34.458 "ddgst": false 00:23:34.458 }, 00:23:34.458 "method": "bdev_nvme_attach_controller" 00:23:34.458 },{ 00:23:34.458 "params": { 00:23:34.459 "name": "Nvme3", 00:23:34.459 "trtype": "rdma", 00:23:34.459 "traddr": "192.168.100.8", 00:23:34.459 "adrfam": "ipv4", 00:23:34.459 "trsvcid": "4420", 00:23:34.459 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:34.459 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:34.459 "hdgst": false, 00:23:34.459 "ddgst": false 00:23:34.459 }, 00:23:34.459 "method": "bdev_nvme_attach_controller" 00:23:34.459 },{ 00:23:34.459 "params": { 00:23:34.459 "name": "Nvme4", 00:23:34.459 "trtype": "rdma", 00:23:34.459 "traddr": "192.168.100.8", 00:23:34.459 "adrfam": "ipv4", 00:23:34.459 "trsvcid": "4420", 00:23:34.459 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:34.459 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:34.459 "hdgst": false, 00:23:34.459 "ddgst": false 00:23:34.459 }, 00:23:34.459 "method": "bdev_nvme_attach_controller" 00:23:34.459 },{ 00:23:34.459 "params": { 00:23:34.459 "name": "Nvme5", 00:23:34.459 "trtype": "rdma", 00:23:34.459 "traddr": "192.168.100.8", 00:23:34.459 "adrfam": "ipv4", 00:23:34.459 "trsvcid": "4420", 00:23:34.459 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:34.459 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:34.459 "hdgst": false, 00:23:34.459 "ddgst": false 00:23:34.459 }, 00:23:34.459 "method": "bdev_nvme_attach_controller" 00:23:34.459 },{ 00:23:34.459 "params": { 00:23:34.459 "name": "Nvme6", 00:23:34.459 "trtype": "rdma", 00:23:34.459 "traddr": "192.168.100.8", 00:23:34.459 "adrfam": "ipv4", 00:23:34.459 "trsvcid": "4420", 00:23:34.459 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:34.459 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:34.459 "hdgst": false, 00:23:34.459 "ddgst": false 00:23:34.459 }, 00:23:34.459 "method": "bdev_nvme_attach_controller" 00:23:34.459 },{ 00:23:34.459 "params": { 00:23:34.459 "name": "Nvme7", 00:23:34.459 "trtype": "rdma", 00:23:34.459 "traddr": "192.168.100.8", 00:23:34.459 "adrfam": "ipv4", 00:23:34.459 "trsvcid": "4420", 00:23:34.459 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:34.459 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:34.459 "hdgst": false, 00:23:34.459 "ddgst": false 00:23:34.459 }, 00:23:34.459 "method": "bdev_nvme_attach_controller" 00:23:34.459 },{ 00:23:34.459 "params": { 00:23:34.459 "name": "Nvme8", 00:23:34.459 "trtype": "rdma", 00:23:34.459 "traddr": "192.168.100.8", 00:23:34.459 "adrfam": "ipv4", 00:23:34.459 "trsvcid": "4420", 00:23:34.459 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:34.459 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:34.459 "hdgst": false, 00:23:34.459 "ddgst": false 00:23:34.459 }, 00:23:34.459 "method": "bdev_nvme_attach_controller" 00:23:34.459 },{ 00:23:34.459 "params": { 00:23:34.459 "name": "Nvme9", 00:23:34.459 "trtype": "rdma", 00:23:34.459 "traddr": "192.168.100.8", 00:23:34.459 "adrfam": "ipv4", 00:23:34.459 "trsvcid": "4420", 00:23:34.459 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:34.459 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:34.459 "hdgst": false, 00:23:34.459 "ddgst": false 00:23:34.459 }, 00:23:34.459 "method": "bdev_nvme_attach_controller" 00:23:34.459 },{ 00:23:34.459 "params": { 00:23:34.459 "name": "Nvme10", 00:23:34.459 "trtype": "rdma", 00:23:34.459 "traddr": "192.168.100.8", 00:23:34.459 "adrfam": "ipv4", 00:23:34.459 "trsvcid": "4420", 00:23:34.459 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:34.459 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:34.459 "hdgst": false, 00:23:34.459 "ddgst": false 00:23:34.459 }, 00:23:34.459 "method": "bdev_nvme_attach_controller" 00:23:34.459 }' 00:23:34.459 [2024-12-03 11:51:05.027079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.715 [2024-12-03 11:51:05.094506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.644 Running I/O for 10 seconds... 
00:23:36.208 11:51:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.209 11:51:06 -- common/autotest_common.sh@862 -- # return 0 00:23:36.209 11:51:06 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:36.209 11:51:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.209 11:51:06 -- common/autotest_common.sh@10 -- # set +x 00:23:36.209 11:51:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.209 11:51:06 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:36.209 11:51:06 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:36.209 11:51:06 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:36.209 11:51:06 -- target/shutdown.sh@57 -- # local ret=1 00:23:36.209 11:51:06 -- target/shutdown.sh@58 -- # local i 00:23:36.209 11:51:06 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:36.209 11:51:06 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:36.209 11:51:06 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:36.209 11:51:06 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:36.209 11:51:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.209 11:51:06 -- common/autotest_common.sh@10 -- # set +x 00:23:36.209 11:51:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.209 11:51:06 -- target/shutdown.sh@60 -- # read_io_count=491 00:23:36.209 11:51:06 -- target/shutdown.sh@63 -- # '[' 491 -ge 100 ']' 00:23:36.209 11:51:06 -- target/shutdown.sh@64 -- # ret=0 00:23:36.209 11:51:06 -- target/shutdown.sh@65 -- # break 00:23:36.209 11:51:06 -- target/shutdown.sh@69 -- # return 0 00:23:36.209 11:51:06 -- target/shutdown.sh@109 -- # killprocess 3827094 00:23:36.209 11:51:06 -- common/autotest_common.sh@936 -- # '[' -z 3827094 ']' 00:23:36.209 11:51:06 -- common/autotest_common.sh@940 -- # kill -0 3827094 00:23:36.209 11:51:06 -- common/autotest_common.sh@941 -- # uname 00:23:36.209 11:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.209 11:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3827094 00:23:36.466 11:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:36.466 11:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:36.466 11:51:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3827094' 00:23:36.466 killing process with pid 3827094 00:23:36.466 11:51:06 -- common/autotest_common.sh@955 -- # kill 3827094 00:23:36.466 11:51:06 -- common/autotest_common.sh@960 -- # wait 3827094 00:23:36.466 Received shutdown signal, test time was about 0.947246 seconds 00:23:36.466 00:23:36.466 Latency(us) 00:23:36.466 [2024-12-03T10:51:07.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.466 [2024-12-03T10:51:07.080Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.466 Verification LBA range: start 0x0 length 0x400 00:23:36.466 Nvme1n1 : 0.94 726.11 45.38 0.00 0.00 86868.71 7654.60 120795.96 00:23:36.466 [2024-12-03T10:51:07.080Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.466 Verification LBA range: start 0x0 length 0x400 00:23:36.466 Nvme2n1 : 0.94 740.28 46.27 0.00 0.00 84421.95 7864.32 112407.35 00:23:36.466 [2024-12-03T10:51:07.080Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.466 Verification LBA range: start 0x0 length 0x400 00:23:36.466 Nvme3n1 : 
0.94 736.34 46.02 0.00 0.00 84262.63 7969.18 73819.75 00:23:36.466 [2024-12-03T10:51:07.080Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.466 Verification LBA range: start 0x0 length 0x400 00:23:36.466 Nvme4n1 : 0.94 735.59 45.97 0.00 0.00 83779.86 8126.46 72561.46 00:23:36.466 [2024-12-03T10:51:07.080Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.466 Verification LBA range: start 0x0 length 0x400 00:23:36.466 Nvme5n1 : 0.94 734.85 45.93 0.00 0.00 83305.95 8231.32 71303.17 00:23:36.466 [2024-12-03T10:51:07.080Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.467 Verification LBA range: start 0x0 length 0x400 00:23:36.467 Nvme6n1 : 0.94 734.12 45.88 0.00 0.00 82823.11 8336.18 70464.31 00:23:36.467 [2024-12-03T10:51:07.081Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.467 Verification LBA range: start 0x0 length 0x400 00:23:36.467 Nvme7n1 : 0.94 733.37 45.84 0.00 0.00 82304.20 8493.47 72142.03 00:23:36.467 [2024-12-03T10:51:07.081Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.467 Verification LBA range: start 0x0 length 0x400 00:23:36.467 Nvme8n1 : 0.94 732.63 45.79 0.00 0.00 81798.40 8598.32 73819.75 00:23:36.467 [2024-12-03T10:51:07.081Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.467 Verification LBA range: start 0x0 length 0x400 00:23:36.467 Nvme9n1 : 0.95 731.90 45.74 0.00 0.00 81294.64 8703.18 75078.04 00:23:36.467 [2024-12-03T10:51:07.081Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:36.467 Verification LBA range: start 0x0 length 0x400 00:23:36.467 Nvme10n1 : 0.95 510.32 31.89 0.00 0.00 115722.46 7864.32 333866.60 00:23:36.467 [2024-12-03T10:51:07.081Z] =================================================================================================================== 00:23:36.467 [2024-12-03T10:51:07.081Z] Total : 7115.50 444.72 0.00 0.00 85750.62 7654.60 333866.60 00:23:36.747 11:51:07 -- target/shutdown.sh@112 -- # sleep 1 00:23:37.677 11:51:08 -- target/shutdown.sh@113 -- # kill -0 3826509 00:23:37.677 11:51:08 -- target/shutdown.sh@115 -- # stoptarget 00:23:37.677 11:51:08 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:37.677 11:51:08 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:37.677 11:51:08 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:37.677 11:51:08 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:37.677 11:51:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:37.677 11:51:08 -- nvmf/common.sh@116 -- # sync 00:23:37.677 11:51:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:37.678 11:51:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:37.678 11:51:08 -- nvmf/common.sh@119 -- # set +e 00:23:37.678 11:51:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:37.678 11:51:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:37.678 rmmod nvme_rdma 00:23:37.678 rmmod nvme_fabrics 00:23:37.678 11:51:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:37.678 11:51:08 -- nvmf/common.sh@123 -- # set -e 00:23:37.678 11:51:08 -- nvmf/common.sh@124 -- # return 0 00:23:37.678 11:51:08 -- nvmf/common.sh@477 -- # '[' -n 3826509 ']' 00:23:37.678 11:51:08 -- nvmf/common.sh@478 -- # killprocess 3826509 00:23:37.678 11:51:08 -- 
common/autotest_common.sh@936 -- # '[' -z 3826509 ']' 00:23:37.678 11:51:08 -- common/autotest_common.sh@940 -- # kill -0 3826509 00:23:37.678 11:51:08 -- common/autotest_common.sh@941 -- # uname 00:23:37.678 11:51:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.678 11:51:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3826509 00:23:37.935 11:51:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:37.935 11:51:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:37.935 11:51:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3826509' 00:23:37.935 killing process with pid 3826509 00:23:37.935 11:51:08 -- common/autotest_common.sh@955 -- # kill 3826509 00:23:37.935 11:51:08 -- common/autotest_common.sh@960 -- # wait 3826509 00:23:38.503 11:51:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:38.503 11:51:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:38.503 00:23:38.503 real 0m5.802s 00:23:38.503 user 0m23.410s 00:23:38.503 sys 0m1.232s 00:23:38.503 11:51:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:38.503 11:51:08 -- common/autotest_common.sh@10 -- # set +x 00:23:38.503 ************************************ 00:23:38.503 END TEST nvmf_shutdown_tc2 00:23:38.503 ************************************ 00:23:38.503 11:51:08 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:38.503 11:51:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:38.503 11:51:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:38.503 11:51:08 -- common/autotest_common.sh@10 -- # set +x 00:23:38.503 ************************************ 00:23:38.503 START TEST nvmf_shutdown_tc3 00:23:38.503 ************************************ 00:23:38.503 11:51:08 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:23:38.503 11:51:08 -- target/shutdown.sh@120 -- # starttarget 00:23:38.503 11:51:08 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:38.503 11:51:08 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:38.503 11:51:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.503 11:51:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:38.503 11:51:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:38.503 11:51:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:38.503 11:51:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.503 11:51:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.503 11:51:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.504 11:51:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:38.504 11:51:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:38.504 11:51:08 -- common/autotest_common.sh@10 -- # set +x 00:23:38.504 11:51:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:38.504 11:51:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:38.504 11:51:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:38.504 11:51:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:38.504 11:51:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:38.504 11:51:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:38.504 11:51:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:38.504 11:51:08 -- nvmf/common.sh@294 -- # net_devs=() 00:23:38.504 11:51:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:38.504 11:51:08 -- 
nvmf/common.sh@295 -- # e810=() 00:23:38.504 11:51:08 -- nvmf/common.sh@295 -- # local -ga e810 00:23:38.504 11:51:08 -- nvmf/common.sh@296 -- # x722=() 00:23:38.504 11:51:08 -- nvmf/common.sh@296 -- # local -ga x722 00:23:38.504 11:51:08 -- nvmf/common.sh@297 -- # mlx=() 00:23:38.504 11:51:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:38.504 11:51:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.504 11:51:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:38.504 11:51:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:38.504 11:51:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:38.504 11:51:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:38.504 11:51:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:38.504 11:51:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:38.504 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:38.504 11:51:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:38.504 11:51:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:38.504 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:38.504 11:51:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:38.504 11:51:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:38.504 11:51:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.504 11:51:08 -- nvmf/common.sh@383 -- # 
(( 1 == 0 )) 00:23:38.504 11:51:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.504 11:51:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:38.504 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:38.504 11:51:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.504 11:51:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.504 11:51:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:38.504 11:51:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.504 11:51:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:38.504 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:38.504 11:51:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.504 11:51:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:38.504 11:51:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:38.504 11:51:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:38.504 11:51:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:38.504 11:51:08 -- nvmf/common.sh@57 -- # uname 00:23:38.504 11:51:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:38.504 11:51:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:38.504 11:51:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:38.504 11:51:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:38.504 11:51:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:38.504 11:51:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:38.504 11:51:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:38.504 11:51:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:38.504 11:51:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:38.504 11:51:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:38.504 11:51:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:38.504 11:51:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:38.504 11:51:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:38.504 11:51:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:38.504 11:51:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:38.504 11:51:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:38.504 11:51:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:38.504 11:51:08 -- nvmf/common.sh@104 -- # continue 2 00:23:38.504 11:51:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.504 11:51:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:38.504 11:51:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:38.504 11:51:08 -- nvmf/common.sh@104 -- # continue 2 00:23:38.504 11:51:08 -- nvmf/common.sh@72 -- # for nic_name 
in $(get_rdma_if_list) 00:23:38.504 11:51:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:38.504 11:51:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:38.504 11:51:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:38.504 11:51:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:38.504 11:51:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:38.504 11:51:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:38.504 11:51:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:38.504 11:51:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:38.504 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:38.504 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:38.504 altname enp217s0f0np0 00:23:38.504 altname ens818f0np0 00:23:38.504 inet 192.168.100.8/24 scope global mlx_0_0 00:23:38.504 valid_lft forever preferred_lft forever 00:23:38.504 11:51:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:38.504 11:51:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:38.504 11:51:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:38.504 11:51:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:38.504 11:51:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:38.504 11:51:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:38.504 11:51:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:38.504 11:51:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:38.504 11:51:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:38.504 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:38.504 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:38.504 altname enp217s0f1np1 00:23:38.504 altname ens818f1np1 00:23:38.504 inet 192.168.100.9/24 scope global mlx_0_1 00:23:38.504 valid_lft forever preferred_lft forever 00:23:38.504 11:51:09 -- nvmf/common.sh@410 -- # return 0 00:23:38.504 11:51:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:38.504 11:51:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:38.504 11:51:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:38.504 11:51:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:38.504 11:51:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:38.504 11:51:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:38.504 11:51:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:38.504 11:51:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:38.504 11:51:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:38.504 11:51:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:38.504 11:51:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:38.504 11:51:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.504 11:51:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:38.504 11:51:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:38.504 11:51:09 -- nvmf/common.sh@104 -- # continue 2 00:23:38.504 11:51:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:38.504 11:51:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.505 11:51:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:38.505 11:51:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:38.505 11:51:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:38.505 11:51:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:38.505 11:51:09 -- 
nvmf/common.sh@104 -- # continue 2 00:23:38.505 11:51:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:38.505 11:51:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:38.505 11:51:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:38.505 11:51:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:38.505 11:51:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:38.505 11:51:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:38.505 11:51:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:38.505 11:51:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:38.505 11:51:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:38.505 11:51:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:38.505 11:51:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:38.505 11:51:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:38.505 11:51:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:38.505 192.168.100.9' 00:23:38.505 11:51:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:38.505 192.168.100.9' 00:23:38.505 11:51:09 -- nvmf/common.sh@445 -- # head -n 1 00:23:38.505 11:51:09 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:38.505 11:51:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:38.505 192.168.100.9' 00:23:38.505 11:51:09 -- nvmf/common.sh@446 -- # head -n 1 00:23:38.505 11:51:09 -- nvmf/common.sh@446 -- # tail -n +2 00:23:38.505 11:51:09 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:38.505 11:51:09 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:38.505 11:51:09 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:38.505 11:51:09 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:38.505 11:51:09 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:38.505 11:51:09 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:38.763 11:51:09 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:38.763 11:51:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:38.763 11:51:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.763 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:23:38.763 11:51:09 -- nvmf/common.sh@469 -- # nvmfpid=3827938 00:23:38.763 11:51:09 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:38.763 11:51:09 -- nvmf/common.sh@470 -- # waitforlisten 3827938 00:23:38.763 11:51:09 -- common/autotest_common.sh@829 -- # '[' -z 3827938 ']' 00:23:38.763 11:51:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.763 11:51:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.763 11:51:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.763 11:51:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.763 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:23:38.763 [2024-12-03 11:51:09.168295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:38.763 [2024-12-03 11:51:09.168348] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.763 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.763 [2024-12-03 11:51:09.238930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.763 [2024-12-03 11:51:09.308376] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:38.763 [2024-12-03 11:51:09.308490] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.763 [2024-12-03 11:51:09.308500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.763 [2024-12-03 11:51:09.308509] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.764 [2024-12-03 11:51:09.308615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.764 [2024-12-03 11:51:09.308696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.764 [2024-12-03 11:51:09.308789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.764 [2024-12-03 11:51:09.308790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:39.695 11:51:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.695 11:51:09 -- common/autotest_common.sh@862 -- # return 0 00:23:39.695 11:51:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:39.695 11:51:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.695 11:51:09 -- common/autotest_common.sh@10 -- # set +x 00:23:39.695 11:51:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.695 11:51:10 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:39.695 11:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.695 11:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.695 [2024-12-03 11:51:10.069509] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x245c380/0x2460870) succeed. 00:23:39.695 [2024-12-03 11:51:10.078665] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x245d970/0x24a1f10) succeed. 
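Note on the subsystem setup traced below: the Malloc1..Malloc10 bdevs and the cnode1..cnode10 subsystems are created from the rpcs.txt file that the per-subsystem cat loop assembles, and the file contents themselves are never echoed into this log. As a rough sketch only, assuming the usual SPDK rpc verbs and the Malloc/cnode naming visible later in the trace, one iteration is equivalent to something like:

    # hypothetical equivalent of one rpcs.txt iteration (i=1); not captured in this log
    rpc_cmd bdev_malloc_create -b Malloc1 128 512                         # backing ramdisk; size/block size assumed
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1  # -a: allow any host, -s: serial (assumed)
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The listener address and port in the sketch match the "*** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***" notice printed further down once the RPCs have been applied.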
00:23:39.695 11:51:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.695 11:51:10 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:39.695 11:51:10 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:39.695 11:51:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.695 11:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.695 11:51:10 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.695 11:51:10 -- target/shutdown.sh@28 -- # cat 00:23:39.695 11:51:10 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:39.695 11:51:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.695 11:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:39.695 Malloc1 00:23:39.695 [2024-12-03 11:51:10.300297] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:39.952 Malloc2 00:23:39.952 Malloc3 00:23:39.952 Malloc4 00:23:39.952 Malloc5 00:23:39.952 Malloc6 00:23:39.952 Malloc7 00:23:40.210 Malloc8 00:23:40.210 Malloc9 00:23:40.210 Malloc10 00:23:40.210 11:51:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.210 11:51:10 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:40.210 11:51:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.210 11:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:40.210 11:51:10 -- target/shutdown.sh@124 -- # perfpid=3828251 00:23:40.210 11:51:10 -- target/shutdown.sh@125 -- # waitforlisten 3828251 /var/tmp/bdevperf.sock 00:23:40.210 11:51:10 -- common/autotest_common.sh@829 -- # '[' -z 3828251 ']' 00:23:40.210 11:51:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.210 11:51:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.210 11:51:10 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:40.210 11:51:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:40.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.210 11:51:10 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:40.210 11:51:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.210 11:51:10 -- nvmf/common.sh@520 -- # config=() 00:23:40.210 11:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:40.210 11:51:10 -- nvmf/common.sh@520 -- # local subsystem config 00:23:40.210 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.210 { 00:23:40.210 "params": { 00:23:40.210 "name": "Nvme$subsystem", 00:23:40.210 "trtype": "$TEST_TRANSPORT", 00:23:40.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.210 "adrfam": "ipv4", 00:23:40.210 "trsvcid": "$NVMF_PORT", 00:23:40.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.210 "hdgst": ${hdgst:-false}, 00:23:40.210 "ddgst": ${ddgst:-false} 00:23:40.210 }, 00:23:40.210 "method": "bdev_nvme_attach_controller" 00:23:40.210 } 00:23:40.210 EOF 00:23:40.210 )") 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.210 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.210 { 00:23:40.210 "params": { 00:23:40.210 "name": "Nvme$subsystem", 00:23:40.210 "trtype": "$TEST_TRANSPORT", 00:23:40.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.210 "adrfam": "ipv4", 00:23:40.210 "trsvcid": "$NVMF_PORT", 00:23:40.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.210 "hdgst": ${hdgst:-false}, 00:23:40.210 "ddgst": ${ddgst:-false} 00:23:40.210 }, 00:23:40.210 "method": "bdev_nvme_attach_controller" 00:23:40.210 } 00:23:40.210 EOF 00:23:40.210 )") 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.210 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.210 { 00:23:40.210 "params": { 00:23:40.210 "name": "Nvme$subsystem", 00:23:40.210 "trtype": "$TEST_TRANSPORT", 00:23:40.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.210 "adrfam": "ipv4", 00:23:40.210 "trsvcid": "$NVMF_PORT", 00:23:40.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.210 "hdgst": ${hdgst:-false}, 00:23:40.210 "ddgst": ${ddgst:-false} 00:23:40.210 }, 00:23:40.210 "method": "bdev_nvme_attach_controller" 00:23:40.210 } 00:23:40.210 EOF 00:23:40.210 )") 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.210 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.210 { 00:23:40.210 "params": { 00:23:40.210 "name": "Nvme$subsystem", 00:23:40.210 "trtype": "$TEST_TRANSPORT", 00:23:40.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.210 "adrfam": "ipv4", 00:23:40.210 "trsvcid": "$NVMF_PORT", 00:23:40.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.210 "hdgst": ${hdgst:-false}, 00:23:40.210 "ddgst": ${ddgst:-false} 00:23:40.210 }, 00:23:40.210 "method": "bdev_nvme_attach_controller" 00:23:40.210 } 00:23:40.210 EOF 00:23:40.210 )") 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.210 11:51:10 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:23:40.210 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.210 { 00:23:40.211 "params": { 00:23:40.211 "name": "Nvme$subsystem", 00:23:40.211 "trtype": "$TEST_TRANSPORT", 00:23:40.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.211 "adrfam": "ipv4", 00:23:40.211 "trsvcid": "$NVMF_PORT", 00:23:40.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.211 "hdgst": ${hdgst:-false}, 00:23:40.211 "ddgst": ${ddgst:-false} 00:23:40.211 }, 00:23:40.211 "method": "bdev_nvme_attach_controller" 00:23:40.211 } 00:23:40.211 EOF 00:23:40.211 )") 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.211 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.211 { 00:23:40.211 "params": { 00:23:40.211 "name": "Nvme$subsystem", 00:23:40.211 "trtype": "$TEST_TRANSPORT", 00:23:40.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.211 "adrfam": "ipv4", 00:23:40.211 "trsvcid": "$NVMF_PORT", 00:23:40.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.211 "hdgst": ${hdgst:-false}, 00:23:40.211 "ddgst": ${ddgst:-false} 00:23:40.211 }, 00:23:40.211 "method": "bdev_nvme_attach_controller" 00:23:40.211 } 00:23:40.211 EOF 00:23:40.211 )") 00:23:40.211 [2024-12-03 11:51:10.790233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:40.211 [2024-12-03 11:51:10.790285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828251 ] 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.211 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.211 { 00:23:40.211 "params": { 00:23:40.211 "name": "Nvme$subsystem", 00:23:40.211 "trtype": "$TEST_TRANSPORT", 00:23:40.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.211 "adrfam": "ipv4", 00:23:40.211 "trsvcid": "$NVMF_PORT", 00:23:40.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.211 "hdgst": ${hdgst:-false}, 00:23:40.211 "ddgst": ${ddgst:-false} 00:23:40.211 }, 00:23:40.211 "method": "bdev_nvme_attach_controller" 00:23:40.211 } 00:23:40.211 EOF 00:23:40.211 )") 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.211 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.211 { 00:23:40.211 "params": { 00:23:40.211 "name": "Nvme$subsystem", 00:23:40.211 "trtype": "$TEST_TRANSPORT", 00:23:40.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.211 "adrfam": "ipv4", 00:23:40.211 "trsvcid": "$NVMF_PORT", 00:23:40.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.211 "hdgst": ${hdgst:-false}, 00:23:40.211 "ddgst": ${ddgst:-false} 00:23:40.211 }, 00:23:40.211 "method": "bdev_nvme_attach_controller" 00:23:40.211 } 00:23:40.211 EOF 00:23:40.211 )") 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.211 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.211 { 00:23:40.211 
"params": { 00:23:40.211 "name": "Nvme$subsystem", 00:23:40.211 "trtype": "$TEST_TRANSPORT", 00:23:40.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.211 "adrfam": "ipv4", 00:23:40.211 "trsvcid": "$NVMF_PORT", 00:23:40.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.211 "hdgst": ${hdgst:-false}, 00:23:40.211 "ddgst": ${ddgst:-false} 00:23:40.211 }, 00:23:40.211 "method": "bdev_nvme_attach_controller" 00:23:40.211 } 00:23:40.211 EOF 00:23:40.211 )") 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.211 11:51:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.211 11:51:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.211 { 00:23:40.211 "params": { 00:23:40.211 "name": "Nvme$subsystem", 00:23:40.211 "trtype": "$TEST_TRANSPORT", 00:23:40.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.211 "adrfam": "ipv4", 00:23:40.211 "trsvcid": "$NVMF_PORT", 00:23:40.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.211 "hdgst": ${hdgst:-false}, 00:23:40.211 "ddgst": ${ddgst:-false} 00:23:40.211 }, 00:23:40.211 "method": "bdev_nvme_attach_controller" 00:23:40.211 } 00:23:40.211 EOF 00:23:40.211 )") 00:23:40.211 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.469 11:51:10 -- nvmf/common.sh@542 -- # cat 00:23:40.469 11:51:10 -- nvmf/common.sh@544 -- # jq . 00:23:40.469 11:51:10 -- nvmf/common.sh@545 -- # IFS=, 00:23:40.469 11:51:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme1", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme2", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme3", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme4", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme5", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme6", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme7", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme8", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme9", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 },{ 00:23:40.469 "params": { 00:23:40.469 "name": "Nvme10", 00:23:40.469 "trtype": "rdma", 00:23:40.469 "traddr": "192.168.100.8", 00:23:40.469 "adrfam": "ipv4", 00:23:40.469 "trsvcid": "4420", 00:23:40.469 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:40.469 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:40.469 "hdgst": false, 00:23:40.469 "ddgst": false 00:23:40.469 }, 00:23:40.469 "method": "bdev_nvme_attach_controller" 00:23:40.469 }' 00:23:40.469 [2024-12-03 11:51:10.861961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.469 [2024-12-03 11:51:10.930566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.401 Running I/O for 10 seconds... 
00:23:41.965 11:51:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.965 11:51:12 -- common/autotest_common.sh@862 -- # return 0 00:23:41.965 11:51:12 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:41.965 11:51:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.965 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:23:41.965 11:51:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.965 11:51:12 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.965 11:51:12 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:41.965 11:51:12 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:41.965 11:51:12 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:41.965 11:51:12 -- target/shutdown.sh@57 -- # local ret=1 00:23:41.965 11:51:12 -- target/shutdown.sh@58 -- # local i 00:23:41.965 11:51:12 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:41.965 11:51:12 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:41.965 11:51:12 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:41.965 11:51:12 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:41.965 11:51:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.965 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:23:42.223 11:51:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.223 11:51:12 -- target/shutdown.sh@60 -- # read_io_count=491 00:23:42.223 11:51:12 -- target/shutdown.sh@63 -- # '[' 491 -ge 100 ']' 00:23:42.223 11:51:12 -- target/shutdown.sh@64 -- # ret=0 00:23:42.223 11:51:12 -- target/shutdown.sh@65 -- # break 00:23:42.223 11:51:12 -- target/shutdown.sh@69 -- # return 0 00:23:42.223 11:51:12 -- target/shutdown.sh@134 -- # killprocess 3827938 00:23:42.223 11:51:12 -- common/autotest_common.sh@936 -- # '[' -z 3827938 ']' 00:23:42.223 11:51:12 -- common/autotest_common.sh@940 -- # kill -0 3827938 00:23:42.223 11:51:12 -- common/autotest_common.sh@941 -- # uname 00:23:42.223 11:51:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:42.223 11:51:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3827938 00:23:42.223 11:51:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:42.223 11:51:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:42.223 11:51:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3827938' 00:23:42.223 killing process with pid 3827938 00:23:42.223 11:51:12 -- common/autotest_common.sh@955 -- # kill 3827938 00:23:42.223 11:51:12 -- common/autotest_common.sh@960 -- # wait 3827938 00:23:42.789 11:51:13 -- target/shutdown.sh@135 -- # nvmfpid= 00:23:42.789 11:51:13 -- target/shutdown.sh@138 -- # sleep 1 00:23:43.368 [2024-12-03 11:51:13.721570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.721611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.721625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.721634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.721647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.721656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.721665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.721673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.724229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.368 [2024-12-03 11:51:13.724247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:43.368 [2024-12-03 11:51:13.724272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.724282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.724292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.724301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.724310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.724318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.724326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.724334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.726752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.368 [2024-12-03 11:51:13.726794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:43.368 [2024-12-03 11:51:13.726844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.726858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.726872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.726884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.726898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.726910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.726923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.726935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.729299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.368 [2024-12-03 11:51:13.729339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:43.368 [2024-12-03 11:51:13.729396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.729429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.729462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.729492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.729524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.729555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.729586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.729617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.732159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.368 [2024-12-03 11:51:13.732199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:23:43.368 [2024-12-03 11:51:13.732252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.732285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.732318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.732348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.732381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.732410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.732443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.732474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.735458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.368 [2024-12-03 11:51:13.735497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:43.368 [2024-12-03 11:51:13.735546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.735581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.735614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.735644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.735676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.735706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.735746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.735776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.737771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.368 [2024-12-03 11:51:13.737829] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:23:43.368 [2024-12-03 11:51:13.737879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.737912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.737945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.737975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.738007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.738037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.738069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.738099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.740623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.368 [2024-12-03 11:51:13.740663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:43.368 [2024-12-03 11:51:13.740711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.368 [2024-12-03 11:51:13.740744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.368 [2024-12-03 11:51:13.740778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.740808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.740840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.740870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.740903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.740933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.743214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.369 [2024-12-03 11:51:13.743255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:23:43.369 [2024-12-03 11:51:13.743305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.743338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.743377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.743407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.743439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.743470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.743502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.743533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.746181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.369 [2024-12-03 11:51:13.746224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:43.369 [2024-12-03 11:51:13.746273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.746306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.746338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.746368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.746400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.746431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.746463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.369 [2024-12-03 11:51:13.746493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62471 cdw0:0 sqhd:8400 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.748975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.369 [2024-12-03 11:51:13.749016] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
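Every pending admin command in the dump above and below completes with status (00/08) on qid:0: the first value is the NVMe status code type (0x0, generic command status) and the second the status code (0x08, "Command Aborted due to SQ Deletion"), which is what the host-side driver reports while the target deletes its admin submission queues during shutdown; the outstanding ASYNC EVENT REQUESTs are aborted and each controller is then marked failed. A small helper to decode the pair, covering only the codes seen in this log (an illustrative sketch, not part of the test framework):

    # hypothetical decoder for the "(SCT/SC)" pairs printed by spdk_nvme_print_completion
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct/$sc" in
            00/00) echo "SUCCESS" ;;
            00/08) echo "ABORTED - SQ DELETION" ;;   # matches the completions in this dump
            *)     echo "SCT=0x$sct SC=0x$sc (see the NVMe base spec status code tables)" ;;
        esac
    }
    # decode_nvme_status 00 08   -> ABORTED - SQ DELETION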
00:23:43.369 [2024-12-03 11:51:13.751575] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller. 00:23:43.369 [2024-12-03 11:51:13.751620] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.369 [2024-12-03 11:51:13.755195] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256ec0 was disconnected and freed. reset controller. 00:23:43.369 [2024-12-03 11:51:13.755246] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.369 [2024-12-03 11:51:13.757064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x183b00 00:23:43.369 [2024-12-03 11:51:13.757084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x183b00 00:23:43.369 [2024-12-03 11:51:13.757162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183c00 00:23:43.369 [2024-12-03 11:51:13.757230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008ef180 len:0x10000 key:0x183c00 00:23:43.369 [2024-12-03 11:51:13.757262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001950f900 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000085ed00 len:0x10000 key:0x183c00 00:23:43.369 [2024-12-03 11:51:13.757328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x2000194ef800 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x183b00 00:23:43.369 [2024-12-03 11:51:13.757422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x183c00 00:23:43.369 [2024-12-03 11:51:13.757483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x183c00 00:23:43.369 [2024-12-03 11:51:13.757513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001957fc80 len:0x10000 key:0x182a00 
00:23:43.369 [2024-12-03 11:51:13.757640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000086ed80 len:0x10000 key:0x183c00 00:23:43.369 [2024-12-03 11:51:13.757671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080ea80 len:0x10000 key:0x183c00 00:23:43.369 [2024-12-03 11:51:13.757701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195f0000 len:0x10000 key:0x182a00 00:23:43.369 [2024-12-03 11:51:13.757732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081eb00 len:0x10000 key:0x183c00 00:23:43.369 [2024-12-03 11:51:13.757764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x183b00 00:23:43.369 [2024-12-03 11:51:13.757795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.369 [2024-12-03 11:51:13.757813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c8f000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.757825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.757846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c6e000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.757860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.757878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.757894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.757912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 
11:51:13.757926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.757944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.757957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.757975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.757988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013422000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013401000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133e0000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1ec000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c189000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c168000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c147000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c126000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4b6000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4d7000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.758683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4f8000 len:0x10000 key:0x184300 00:23:43.370 [2024-12-03 11:51:13.758696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.761204] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller. 00:23:43.370 [2024-12-03 11:51:13.761247] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
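Each of these per-path dumps ends the same way: the disconnected qpair is freed with "reset controller", and the follow-up failover attempt is skipped because a reset is already in progress, so concurrent path failures are coalesced into the one reset that is already running. For orientation only, the public driver API exposes the equivalent synchronous operation as spdk_nvme_ctrlr_reset(); a minimal sketch, assuming 'ctrlr' was obtained from spdk_nvme_connect() (this is not the asynchronous bdev_nvme reset path exercised by the test):

/* Sketch only: a synchronous controller reset through the public NVMe driver
 * API, not the bdev_nvme state machine shown in the log. Error handling is
 * abbreviated; 'ctrlr' is assumed to be a connected controller handle. */
#include <stdio.h>
#include "spdk/nvme.h"

static int
reset_path(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_reset(ctrlr);  /* disconnects and re-initializes the controller */

	if (rc != 0) {
		fprintf(stderr, "controller reset failed: %d\n", rc);
	}
	return rc;
}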
00:23:43.370 [2024-12-03 11:51:13.762832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x182b00 00:23:43.370 [2024-12-03 11:51:13.762850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.762872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194cf700 len:0x10000 key:0x182a00 00:23:43.370 [2024-12-03 11:51:13.762886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.762903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x182a00 00:23:43.370 [2024-12-03 11:51:13.762917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.762935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x182b00 00:23:43.370 [2024-12-03 11:51:13.762948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.762965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x182b00 00:23:43.370 [2024-12-03 11:51:13.762979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.762998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x182b00 00:23:43.370 [2024-12-03 11:51:13.763010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.763028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x182d00 00:23:43.370 [2024-12-03 11:51:13.763041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.370 [2024-12-03 11:51:13.763063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182c00 00:23:43.370 [2024-12-03 11:51:13.763077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x182a00 00:23:43.371 [2024-12-03 11:51:13.763117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 
11:51:13.763135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x182d00 00:23:43.371 [2024-12-03 11:51:13.763367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x182d00 00:23:43.371 [2024-12-03 11:51:13.763431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x182a00 00:23:43.371 [2024-12-03 11:51:13.763462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x182a00 00:23:43.371 [2024-12-03 11:51:13.763523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x182d00 00:23:43.371 [2024-12-03 11:51:13.763555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x182b00 00:23:43.371 [2024-12-03 11:51:13.763585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x182a00 00:23:43.371 [2024-12-03 11:51:13.763616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x182a00 00:23:43.371 [2024-12-03 11:51:13.763647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x182a00 00:23:43.371 [2024-12-03 11:51:13.763679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x182b00 00:23:43.371 [2024-12-03 11:51:13.763834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x182b00 00:23:43.371 [2024-12-03 11:51:13.763865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x182c00 00:23:43.371 [2024-12-03 11:51:13.763959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.763978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78208 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012405000 len:0x10000 key:0x184300 00:23:43.371 [2024-12-03 11:51:13.763991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.764011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012426000 len:0x10000 key:0x184300 00:23:43.371 [2024-12-03 11:51:13.764023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.764042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012447000 len:0x10000 key:0x184300 00:23:43.371 [2024-12-03 11:51:13.764056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.764074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012468000 len:0x10000 key:0x184300 00:23:43.371 [2024-12-03 11:51:13.764087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.764106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012489000 len:0x10000 key:0x184300 00:23:43.371 [2024-12-03 11:51:13.764125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.764146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001212f000 len:0x10000 key:0x184300 00:23:43.371 [2024-12-03 11:51:13.764159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.764178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010aa0000 len:0x10000 key:0x184300 00:23:43.371 [2024-12-03 11:51:13.764192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.371 [2024-12-03 11:51:13.764210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ac1000 len:0x10000 key:0x184300 00:23:43.371 [2024-12-03 11:51:13.764223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ae2000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200010b03000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b24000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf75000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125b2000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012591000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012570000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126fc000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecfa000 len:0x10000 key:0x184300 
00:23:43.372 [2024-12-03 11:51:13.764574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130a7000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013086000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d26c000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d24b000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7fb000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7da000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7b9000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c798000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.764842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c777000 len:0x10000 key:0x184300 00:23:43.372 [2024-12-03 11:51:13.764858] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768447] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:23:43.372 [2024-12-03 11:51:13.768471] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.372 [2024-12-03 11:51:13.768491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182e00 00:23:43.372 [2024-12-03 11:51:13.768504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182e00 00:23:43.372 [2024-12-03 11:51:13.768541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182e00 00:23:43.372 [2024-12-03 11:51:13.768569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:23:43.372 [2024-12-03 11:51:13.768597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:23:43.372 [2024-12-03 11:51:13.768624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:23:43.372 [2024-12-03 11:51:13.768651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a9fb80 len:0x10000 key:0x182d00 00:23:43.372 [2024-12-03 11:51:13.768677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182f00 00:23:43.372 [2024-12-03 11:51:13.768704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.372 [2024-12-03 11:51:13.768720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182f00 00:23:43.372 [2024-12-03 11:51:13.768732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x182d00 00:23:43.373 [2024-12-03 11:51:13.768759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.768790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.768817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.768844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.768872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.768899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.768927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.768955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 
00:23:43.373 [2024-12-03 11:51:13.768970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a5f980 len:0x10000 key:0x182d00 00:23:43.373 [2024-12-03 11:51:13.768982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.768998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182d00 00:23:43.373 [2024-12-03 11:51:13.769063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.769095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.769150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.769204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.769258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.769338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.769365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182d00 00:23:43.373 [2024-12-03 11:51:13.769391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182f00 00:23:43.373 [2024-12-03 11:51:13.769448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfe80 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182e00 00:23:43.373 [2024-12-03 11:51:13.769500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001354b000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001352a000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013695000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013674000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013653000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013632000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.373 [2024-12-03 11:51:13.769736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x184300 00:23:43.373 [2024-12-03 11:51:13.769748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135f0000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.769774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c66f000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.769802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c64e000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.769830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.769858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.769887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.769924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.769952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.769979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.769995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011df6000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011dd5000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c081000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c060000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49d000 
len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.770282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d47c000 len:0x10000 key:0x184300 00:23:43.374 [2024-12-03 11:51:13.770294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773229] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller. 00:23:43.374 [2024-12-03 11:51:13.773277] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.374 [2024-12-03 11:51:13.773322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:23:43.374 [2024-12-03 11:51:13.773356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183300 00:23:43.374 [2024-12-03 11:51:13.773462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183300 00:23:43.374 [2024-12-03 11:51:13.773496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183300 00:23:43.374 [2024-12-03 11:51:13.773526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183100 00:23:43.374 [2024-12-03 11:51:13.773557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:23:43.374 [2024-12-03 11:51:13.773589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:23:43.374 [2024-12-03 11:51:13.773618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183100 00:23:43.374 [2024-12-03 11:51:13.773648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183300 00:23:43.374 [2024-12-03 11:51:13.773676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:23:43.374 [2024-12-03 11:51:13.773706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x183100 00:23:43.374 [2024-12-03 11:51:13.773735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183100 00:23:43.374 [2024-12-03 11:51:13.773764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182f00 00:23:43.374 [2024-12-03 11:51:13.773794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183300 00:23:43.374 [2024-12-03 11:51:13.773826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x183300 00:23:43.374 [2024-12-03 11:51:13.773855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183300 00:23:43.374 [2024-12-03 11:51:13.773885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.374 [2024-12-03 11:51:13.773900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.773914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.773930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:23:43.375 [2024-12-03 11:51:13.773945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.773962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.773974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.773990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:23:43.375 [2024-12-03 11:51:13.774001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:23:43.375 [2024-12-03 11:51:13.774032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.774061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183100 00:23:43.375 [2024-12-03 11:51:13.774088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182f00 00:23:43.375 [2024-12-03 11:51:13.774125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.774154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:23:43.375 [2024-12-03 11:51:13.774181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183100 00:23:43.375 [2024-12-03 11:51:13.774208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:23:43.375 [2024-12-03 11:51:13.774234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.774263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.774289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.774316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:23:43.375 [2024-12-03 11:51:13.774342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182f00 00:23:43.375 [2024-12-03 11:51:13.774369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.774396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 
00:23:43.375 [2024-12-03 11:51:13.774412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:23:43.375 [2024-12-03 11:51:13.774423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x183300 00:23:43.375 [2024-12-03 11:51:13.774452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001379d000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001377c000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001375b000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001373a000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a5000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774661] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b484000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b463000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b400000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c83d000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 key:0x184300 00:23:43.375 [2024-12-03 11:51:13.774906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.375 [2024-12-03 11:51:13.774922] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.774934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.774951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011952000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.774964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.774980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f61000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.774991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f40000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ef000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ce000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.775242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ad000 len:0x10000 key:0x184300 00:23:43.376 [2024-12-03 11:51:13.775254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778151] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192565c0 was disconnected and freed. reset controller. 00:23:43.376 [2024-12-03 11:51:13.778169] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.376 [2024-12-03 11:51:13.778187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 
11:51:13.778345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a44f900 len:0x10000 key:0x183100 00:23:43.376 [2024-12-03 11:51:13.778372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183100 00:23:43.376 [2024-12-03 11:51:13.778689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183600 00:23:43.376 [2024-12-03 11:51:13.778744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183100 00:23:43.376 [2024-12-03 11:51:13.778772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.376 [2024-12-03 11:51:13.778787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183800 00:23:43.376 [2024-12-03 11:51:13.778800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.778815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183800 00:23:43.377 [2024-12-03 11:51:13.778827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.778842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183600 00:23:43.377 [2024-12-03 11:51:13.778854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.778869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183800 00:23:43.377 [2024-12-03 11:51:13.778881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.778897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183600 00:23:43.377 [2024-12-03 11:51:13.778909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.778925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183800 00:23:43.377 [2024-12-03 11:51:13.778938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.778953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183600 00:23:43.377 [2024-12-03 11:51:13.778965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.778981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183600 00:23:43.377 [2024-12-03 11:51:13.778992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183800 00:23:43.377 [2024-12-03 11:51:13.779019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183800 00:23:43.377 [2024-12-03 11:51:13.779046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183600 00:23:43.377 [2024-12-03 11:51:13.779073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183600 00:23:43.377 [2024-12-03 11:51:13.779099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183800 00:23:43.377 [2024-12-03 11:51:13.779133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183600 00:23:43.377 [2024-12-03 11:51:13.779159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183600 00:23:43.377 [2024-12-03 11:51:13.779187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b56b000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b54a000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b5000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b694000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 
m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b673000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 
11:51:13.779623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121b3000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012192000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.377 [2024-12-03 11:51:13.779706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x184300 00:23:43.377 [2024-12-03 11:51:13.779718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133bf000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001339e000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c336000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb97000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb55000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.779954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb34000 len:0x10000 key:0x184300 00:23:43.378 [2024-12-03 11:51:13.779966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.782672] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256380 was disconnected and freed. reset controller. 00:23:43.378 [2024-12-03 11:51:13.782716] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:43.378 [2024-12-03 11:51:13.782759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183800 00:23:43.378 [2024-12-03 11:51:13.782795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.782841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.782874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.782917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.782950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.782994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183a00 00:23:43.378 [2024-12-03 11:51:13.783066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183900 00:23:43.378 [2024-12-03 11:51:13.783093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x183900 00:23:43.378 [2024-12-03 11:51:13.783130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183a00 00:23:43.378 [2024-12-03 11:51:13.783158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 
11:51:13.783200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183900 00:23:43.378 [2024-12-03 11:51:13.783212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183a00 00:23:43.378 [2024-12-03 11:51:13.783239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183a00 00:23:43.378 [2024-12-03 11:51:13.783265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183a00 00:23:43.378 [2024-12-03 11:51:13.783402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183900 00:23:43.378 [2024-12-03 11:51:13.783430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183900 00:23:43.378 [2024-12-03 11:51:13.783487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183900 00:23:43.378 [2024-12-03 11:51:13.783513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183a00 00:23:43.378 [2024-12-03 11:51:13.783541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183a00 00:23:43.378 [2024-12-03 11:51:13.783568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183800 00:23:43.378 [2024-12-03 11:51:13.783595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183500 00:23:43.378 [2024-12-03 11:51:13.783649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.378 [2024-12-03 11:51:13.783665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183a00 00:23:43.379 [2024-12-03 11:51:13.783677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183500 00:23:43.379 [2024-12-03 11:51:13.783704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183500 00:23:43.379 [2024-12-03 11:51:13.783732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183a00 00:23:43.379 [2024-12-03 11:51:13.783761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183a00 00:23:43.379 [2024-12-03 11:51:13.783788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183900 00:23:43.379 [2024-12-03 11:51:13.783816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183800 00:23:43.379 [2024-12-03 11:51:13.783843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183500 00:23:43.379 [2024-12-03 11:51:13.783870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183900 00:23:43.379 [2024-12-03 11:51:13.783897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183500 00:23:43.379 [2024-12-03 11:51:13.783923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.783950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.783977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.783993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b77b000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75a000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011973000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0bb000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000e09a000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e079000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e016000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff5000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd4000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123c3000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123a2000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012381000 len:0x10000 key:0x184300 
00:23:43.379 [2024-12-03 11:51:13.784469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012360000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135cf000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135ae000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c546000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x184300 00:23:43.379 [2024-12-03 11:51:13.784661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.379 [2024-12-03 11:51:13.784676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x184300 00:23:43.380 [2024-12-03 11:51:13.784688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.784707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd86000 len:0x10000 key:0x184300 00:23:43.380 [2024-12-03 11:51:13.784719] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.787521] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256140 was disconnected and freed. reset controller. 00:23:43.380 [2024-12-03 11:51:13.787564] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.380 [2024-12-03 11:51:13.787609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.787641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.787704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.787739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.787783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.787815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.787859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.787891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.787935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.787967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.787998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183f00 00:23:43.380 [2024-12-03 11:51:13.788151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 
00:23:43.380 [2024-12-03 11:51:13.788326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183700 00:23:43.380 [2024-12-03 11:51:13.788757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.380 [2024-12-03 11:51:13.788772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x184000 00:23:43.380 [2024-12-03 11:51:13.788783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.788799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183700 00:23:43.381 [2024-12-03 11:51:13.788810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.788825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183700 00:23:43.381 [2024-12-03 11:51:13.788837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.788852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ce000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.788864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.788880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ad000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.788893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.788908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.788920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.788936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.788947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.788963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.788974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.788990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e457000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e478000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cac000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8b000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c6a000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba93000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba72000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba51000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba30000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126ba000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000c525000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c504000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4e3000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4c2000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4a1000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c480000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8ff000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8de000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8bd000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d89c000 
len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.789603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d01a000 len:0x10000 key:0x184300 00:23:43.381 [2024-12-03 11:51:13.789615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.381 [2024-12-03 11:51:13.792355] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:23:43.381 [2024-12-03 11:51:13.792402] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.381 [2024-12-03 11:51:13.792446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183200 00:23:43.381 [2024-12-03 11:51:13.792486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.792533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.792568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.792612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.792646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.792691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.792724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.792769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.792803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.792847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.792881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.792926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183f00 00:23:43.382 [2024-12-03 11:51:13.792959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183f00 00:23:43.382 [2024-12-03 11:51:13.793037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.793127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.793207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183f00 00:23:43.382 [2024-12-03 11:51:13.793466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183f00 00:23:43.382 [2024-12-03 11:51:13.793495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183f00 00:23:43.382 [2024-12-03 11:51:13.793574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.793655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.793681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.793790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.793816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.793869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.793895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183f00 00:23:43.382 [2024-12-03 11:51:13.793948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.793974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.793990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184100 00:23:43.382 [2024-12-03 11:51:13.794001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.794015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.794027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 
dnr:0 00:23:43.382 [2024-12-03 11:51:13.794042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183f00 00:23:43.382 [2024-12-03 11:51:13.794054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.794070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.794082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.794097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183200 00:23:43.382 [2024-12-03 11:51:13.794108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.382 [2024-12-03 11:51:13.794139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183200 00:23:43.383 [2024-12-03 11:51:13.794151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183200 00:23:43.383 [2024-12-03 11:51:13.794178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183200 00:23:43.383 [2024-12-03 11:51:13.794204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184100 00:23:43.383 [2024-12-03 11:51:13.794231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183f00 00:23:43.383 [2024-12-03 11:51:13.794256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183200 00:23:43.383 [2024-12-03 11:51:13.794283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 
11:51:13.794298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183200 00:23:43.383 [2024-12-03 11:51:13.794309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183200 00:23:43.383 [2024-12-03 11:51:13.794336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183f00 00:23:43.383 [2024-12-03 11:51:13.794364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183200 00:23:43.383 [2024-12-03 11:51:13.794391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183200 00:23:43.383 [2024-12-03 11:51:13.794417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184100 00:23:43.383 [2024-12-03 11:51:13.794444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d6e000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d8f000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db93000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbb4000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbd5000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:54400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc82000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.794840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc61000 len:0x10000 key:0x184300 00:23:43.383 [2024-12-03 11:51:13.794852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8f96e000 sqhd:5310 p:0 m:0 dnr:0 00:23:43.383 [2024-12-03 11:51:13.811321] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 00:23:43.383 [2024-12-03 11:51:13.811342] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811394] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811412] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811425] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811436] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811448] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811460] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811472] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811484] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811496] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:43.383 [2024-12-03 11:51:13.811511] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
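[Editor's note] The wall of *NOTICE* lines above is expected output for the shutdown test: when the reset path deletes the I/O submission queue, every in-flight READ/WRITE on qpair 1 completes with "ABORTED - SQ DELETION (00/08)", after which the qpair is disconnected and freed and the remaining controllers report that a failover is already in progress. A minimal, hypothetical way to summarize such a burst when reading a saved console log (these commands are not part of the test suite, and "console.log" is a placeholder file name):

    # Count the aborted completions, then break the aborted commands out by opcode.
    grep -c 'ABORTED - SQ DELETION' console.log
    grep -oE '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' console.log | awk '{print $2}' | sort | uniq -c

The second pipeline prints one count per opcode (READ vs. WRITE), which makes it easy to see how much verify I/O was still outstanding when the queue was torn down.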
00:23:43.383 task offset: 87168 on job bdev=Nvme1n1 fails 00:23:43.383 00:23:43.383 Latency(us) 00:23:43.383 [2024-12-03T10:51:13.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.383 [2024-12-03T10:51:13.997Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.383 [2024-12-03T10:51:13.997Z] Job: Nvme1n1 ended in about 1.97 seconds with error 00:23:43.383 Verification LBA range: start 0x0 length 0x400 00:23:43.383 Nvme1n1 : 1.97 331.61 20.73 32.55 0.00 175014.51 42572.19 1020054.73 00:23:43.383 [2024-12-03T10:51:13.997Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.383 [2024-12-03T10:51:13.997Z] Job: Nvme2n1 ended in about 1.97 seconds with error 00:23:43.383 Verification LBA range: start 0x0 length 0x400 00:23:43.383 Nvme2n1 : 1.97 340.12 21.26 32.49 0.00 170436.03 41313.89 1026765.62 00:23:43.383 [2024-12-03T10:51:13.997Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.383 [2024-12-03T10:51:13.997Z] Job: Nvme3n1 ended in about 1.98 seconds with error 00:23:43.383 Verification LBA range: start 0x0 length 0x400 00:23:43.383 Nvme3n1 : 1.98 325.00 20.31 32.40 0.00 176791.55 42152.76 1093874.48 00:23:43.383 [2024-12-03T10:51:13.998Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme4n1 ended in about 1.98 seconds with error 00:23:43.384 Verification LBA range: start 0x0 length 0x400 00:23:43.384 Nvme4n1 : 1.98 318.88 19.93 32.34 0.00 179495.24 19922.94 1093874.48 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme5n1 ended in about 1.98 seconds with error 00:23:43.384 Verification LBA range: start 0x0 length 0x400 00:23:43.384 Nvme5n1 : 1.98 316.50 19.78 32.25 0.00 180236.88 44669.34 1093874.48 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme6n1 ended in about 1.99 seconds with error 00:23:43.384 Verification LBA range: start 0x0 length 0x400 00:23:43.384 Nvme6n1 : 1.99 315.71 19.73 32.17 0.00 180130.96 45508.20 1093874.48 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme7n1 ended in about 1.99 seconds with error 00:23:43.384 Verification LBA range: start 0x0 length 0x400 00:23:43.384 Nvme7n1 : 1.99 314.96 19.69 32.10 0.00 180008.53 46347.06 1093874.48 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme8n1 ended in about 2.00 seconds with error 00:23:43.384 Verification LBA range: start 0x0 length 0x400 00:23:43.384 Nvme8n1 : 2.00 314.22 19.64 32.02 0.00 179962.31 46137.34 1087163.60 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme9n1 ended in about 2.00 seconds with error 00:23:43.384 Verification LBA range: start 0x0 length 0x400 00:23:43.384 Nvme9n1 : 2.00 313.45 19.59 31.94 0.00 179910.64 44879.05 1087163.60 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:43.384 [2024-12-03T10:51:13.998Z] Job: Nvme10n1 ended in about 2.01 seconds with error 
00:23:43.384 Verification LBA range: start 0x0 length 0x400 00:23:43.384 Nvme10n1 : 2.01 208.59 13.04 31.86 0.00 257315.99 44249.91 1087163.60 00:23:43.384 [2024-12-03T10:51:13.998Z] =================================================================================================================== 00:23:43.384 [2024-12-03T10:51:13.998Z] Total : 3099.04 193.69 322.14 0.00 183578.25 19922.94 1093874.48 00:23:43.384 [2024-12-03 11:51:13.833288] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:43.384 [2024-12-03 11:51:13.833312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833351] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:43.384 [2024-12-03 11:51:13.833501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:43.384 [2024-12-03 11:51:13.846641] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.846700] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.846743] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:23:43.384 [2024-12-03 11:51:13.846853] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.846888] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.846913] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0 00:23:43.384 [2024-12-03 11:51:13.847034] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.847071] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.847095] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580 00:23:43.384 [2024-12-03 11:51:13.847249] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event 
channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.847285] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.847309] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0 00:23:43.384 [2024-12-03 11:51:13.847538] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.847577] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.847601] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:23:43.384 [2024-12-03 11:51:13.847720] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.847756] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.847781] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:23:43.384 [2024-12-03 11:51:13.847902] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.847938] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.847963] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:23:43.384 [2024-12-03 11:51:13.848099] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.848156] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.848182] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:23:43.384 [2024-12-03 11:51:13.848346] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.848384] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.848409] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:23:43.384 [2024-12-03 11:51:13.848533] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.384 [2024-12-03 11:51:13.848570] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.384 [2024-12-03 11:51:13.848596] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100 00:23:43.643 11:51:14 -- target/shutdown.sh@141 -- # kill -9 3828251 00:23:43.643 11:51:14 -- target/shutdown.sh@143 -- # stoptarget 00:23:43.643 11:51:14 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:43.643 11:51:14 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:43.643 11:51:14 -- target/shutdown.sh@43 -- # rm -rf 
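[Editor's note] The per-device table above is bdevperf's summary for the ten NVMe-oF controllers: the columns are runtime in seconds, IOPS, MiB/s, failed and timed-out I/O per second, and average/min/max latency in microseconds. Nonzero Fail/s is the point of shutdown_tc3: the target is torn down while verify I/O is still running. The ERROR lines that follow are the bdevperf process attempting to reset and reconnect each controller; the RDMA connection manager answers every connect with RDMA_CM_EVENT_REJECTED instead of RDMA_CM_EVENT_ESTABLISHED because the subsystems at 192.168.100.8 have just been shut down, so every rqpair reconnect fails. For illustration only (not taken from the log), a manual connect attempt at this point would be refused in the same way:

    # Hypothetical manual check once the target app is gone; nvme-cli's connect is
    # expected to fail just like bdevperf's internal reconnects.
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        || echo "connect rejected - target no longer listening"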
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.643 11:51:14 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:43.643 11:51:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:43.643 11:51:14 -- nvmf/common.sh@116 -- # sync 00:23:43.643 11:51:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:43.643 11:51:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:43.643 11:51:14 -- nvmf/common.sh@119 -- # set +e 00:23:43.643 11:51:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:43.643 11:51:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:43.643 rmmod nvme_rdma 00:23:43.643 rmmod nvme_fabrics 00:23:43.643 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 3828251 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:23:43.643 11:51:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:43.643 11:51:14 -- nvmf/common.sh@123 -- # set -e 00:23:43.643 11:51:14 -- nvmf/common.sh@124 -- # return 0 00:23:43.643 11:51:14 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:23:43.643 11:51:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:43.643 11:51:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:43.643 00:23:43.643 real 0m5.387s 00:23:43.643 user 0m18.457s 00:23:43.643 sys 0m1.354s 00:23:43.643 11:51:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:43.643 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:23:43.643 ************************************ 00:23:43.643 END TEST nvmf_shutdown_tc3 00:23:43.643 ************************************ 00:23:43.902 11:51:14 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:23:43.902 00:23:43.902 real 0m25.832s 00:23:43.902 user 1m15.581s 00:23:43.902 sys 0m9.318s 00:23:43.902 11:51:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:43.902 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:23:43.902 ************************************ 00:23:43.902 END TEST nvmf_shutdown 00:23:43.902 ************************************ 00:23:43.902 11:51:14 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:43.902 11:51:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.902 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:23:43.902 11:51:14 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:43.902 11:51:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.902 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:23:43.902 11:51:14 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:43.902 11:51:14 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:43.902 11:51:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:43.902 11:51:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.902 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:23:43.902 ************************************ 00:23:43.902 START TEST nvmf_multicontroller 00:23:43.902 ************************************ 00:23:43.902 11:51:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:43.902 * Looking for test storage... 
00:23:43.902 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:43.902 11:51:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:43.902 11:51:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:43.902 11:51:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:44.162 11:51:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:44.162 11:51:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:44.162 11:51:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:44.162 11:51:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:44.162 11:51:14 -- scripts/common.sh@335 -- # IFS=.-: 00:23:44.162 11:51:14 -- scripts/common.sh@335 -- # read -ra ver1 00:23:44.162 11:51:14 -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.162 11:51:14 -- scripts/common.sh@336 -- # read -ra ver2 00:23:44.162 11:51:14 -- scripts/common.sh@337 -- # local 'op=<' 00:23:44.162 11:51:14 -- scripts/common.sh@339 -- # ver1_l=2 00:23:44.162 11:51:14 -- scripts/common.sh@340 -- # ver2_l=1 00:23:44.162 11:51:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:44.162 11:51:14 -- scripts/common.sh@343 -- # case "$op" in 00:23:44.162 11:51:14 -- scripts/common.sh@344 -- # : 1 00:23:44.162 11:51:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:44.162 11:51:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.162 11:51:14 -- scripts/common.sh@364 -- # decimal 1 00:23:44.162 11:51:14 -- scripts/common.sh@352 -- # local d=1 00:23:44.162 11:51:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.162 11:51:14 -- scripts/common.sh@354 -- # echo 1 00:23:44.162 11:51:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:44.162 11:51:14 -- scripts/common.sh@365 -- # decimal 2 00:23:44.162 11:51:14 -- scripts/common.sh@352 -- # local d=2 00:23:44.162 11:51:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.162 11:51:14 -- scripts/common.sh@354 -- # echo 2 00:23:44.162 11:51:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:44.162 11:51:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:44.162 11:51:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:44.162 11:51:14 -- scripts/common.sh@367 -- # return 0 00:23:44.162 11:51:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.162 11:51:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:44.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.162 --rc genhtml_branch_coverage=1 00:23:44.162 --rc genhtml_function_coverage=1 00:23:44.162 --rc genhtml_legend=1 00:23:44.162 --rc geninfo_all_blocks=1 00:23:44.162 --rc geninfo_unexecuted_blocks=1 00:23:44.162 00:23:44.162 ' 00:23:44.162 11:51:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:44.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.162 --rc genhtml_branch_coverage=1 00:23:44.162 --rc genhtml_function_coverage=1 00:23:44.162 --rc genhtml_legend=1 00:23:44.162 --rc geninfo_all_blocks=1 00:23:44.162 --rc geninfo_unexecuted_blocks=1 00:23:44.162 00:23:44.162 ' 00:23:44.162 11:51:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:44.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.162 --rc genhtml_branch_coverage=1 00:23:44.162 --rc genhtml_function_coverage=1 00:23:44.162 --rc genhtml_legend=1 00:23:44.162 --rc geninfo_all_blocks=1 00:23:44.162 --rc geninfo_unexecuted_blocks=1 00:23:44.162 00:23:44.162 ' 
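[Editor's note] The trace above is autotest_common.sh deciding which lcov options to use: it captures `lcov --version`, takes the last field with awk, and calls common.sh's `lt 1.15 2` / `cmp_versions`, which splits both version strings on ".-:" and compares them element by element as integers. Since 1 < 2 the comparison returns 0 and the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spelling is exported into LCOV_OPTS and LCOV. A paraphrased sketch of the comparison being traced, reduced to the single element that decides the result (not the verbatim function body):

    # Sketch of the cmp_versions call traced above ("lt 1.15 2").
    IFS=.-: read -ra ver1 <<< "1.15"   # -> (1 15)
    IFS=.-: read -ra ver2 <<< "2"      # -> (2)
    (( ver1[0] < ver2[0] )) && echo "lcov 1.15 sorts before 2"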
00:23:44.162 11:51:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:44.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.162 --rc genhtml_branch_coverage=1 00:23:44.162 --rc genhtml_function_coverage=1 00:23:44.162 --rc genhtml_legend=1 00:23:44.162 --rc geninfo_all_blocks=1 00:23:44.162 --rc geninfo_unexecuted_blocks=1 00:23:44.162 00:23:44.162 ' 00:23:44.162 11:51:14 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.162 11:51:14 -- nvmf/common.sh@7 -- # uname -s 00:23:44.162 11:51:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.162 11:51:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.162 11:51:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.162 11:51:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.162 11:51:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.162 11:51:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.162 11:51:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.162 11:51:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.162 11:51:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.162 11:51:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.162 11:51:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:44.162 11:51:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:44.162 11:51:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.162 11:51:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.162 11:51:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.162 11:51:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:44.162 11:51:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.162 11:51:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.162 11:51:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.162 11:51:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.162 11:51:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.162 11:51:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.162 11:51:14 -- paths/export.sh@5 -- # export PATH 00:23:44.162 11:51:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.162 11:51:14 -- nvmf/common.sh@46 -- # : 0 00:23:44.162 11:51:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:44.162 11:51:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:44.162 11:51:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:44.162 11:51:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.162 11:51:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.162 11:51:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:44.162 11:51:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:44.162 11:51:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:44.162 11:51:14 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:44.162 11:51:14 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:44.162 11:51:14 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:44.162 11:51:14 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:44.162 11:51:14 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:44.162 11:51:14 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:23:44.163 11:51:14 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:44.163 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
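[Editor's note] multicontroller.sh never gets past its prologue on this rig: the `'[' rdma == rdma ']'` check at multicontroller.sh@18 matches the transport under test, the skip message above is printed, and the test returns success without running anything (the `exit 0` follows on the next trace line), because the RDMA stack here cannot configure the same IP for host and target. A paraphrased sketch of that guard; the variable name is an assumption, since the log only shows the already-expanded comparison:

    # Sketch of the guard traced at multicontroller.sh@18-20 (variable name assumed).
    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi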
00:23:44.163 11:51:14 -- host/multicontroller.sh@20 -- # exit 0 00:23:44.163 00:23:44.163 real 0m0.220s 00:23:44.163 user 0m0.116s 00:23:44.163 sys 0m0.121s 00:23:44.163 11:51:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:44.163 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:23:44.163 ************************************ 00:23:44.163 END TEST nvmf_multicontroller 00:23:44.163 ************************************ 00:23:44.163 11:51:14 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:44.163 11:51:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:44.163 11:51:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:44.163 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:23:44.163 ************************************ 00:23:44.163 START TEST nvmf_aer 00:23:44.163 ************************************ 00:23:44.163 11:51:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:44.163 * Looking for test storage... 00:23:44.163 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:44.163 11:51:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:44.163 11:51:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:44.163 11:51:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:44.422 11:51:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:44.422 11:51:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:44.422 11:51:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:44.422 11:51:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:44.422 11:51:14 -- scripts/common.sh@335 -- # IFS=.-: 00:23:44.422 11:51:14 -- scripts/common.sh@335 -- # read -ra ver1 00:23:44.422 11:51:14 -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.422 11:51:14 -- scripts/common.sh@336 -- # read -ra ver2 00:23:44.422 11:51:14 -- scripts/common.sh@337 -- # local 'op=<' 00:23:44.422 11:51:14 -- scripts/common.sh@339 -- # ver1_l=2 00:23:44.422 11:51:14 -- scripts/common.sh@340 -- # ver2_l=1 00:23:44.422 11:51:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:44.422 11:51:14 -- scripts/common.sh@343 -- # case "$op" in 00:23:44.422 11:51:14 -- scripts/common.sh@344 -- # : 1 00:23:44.422 11:51:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:44.422 11:51:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.422 11:51:14 -- scripts/common.sh@364 -- # decimal 1 00:23:44.422 11:51:14 -- scripts/common.sh@352 -- # local d=1 00:23:44.422 11:51:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.422 11:51:14 -- scripts/common.sh@354 -- # echo 1 00:23:44.422 11:51:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:44.422 11:51:14 -- scripts/common.sh@365 -- # decimal 2 00:23:44.422 11:51:14 -- scripts/common.sh@352 -- # local d=2 00:23:44.422 11:51:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.422 11:51:14 -- scripts/common.sh@354 -- # echo 2 00:23:44.422 11:51:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:44.422 11:51:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:44.422 11:51:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:44.422 11:51:14 -- scripts/common.sh@367 -- # return 0 00:23:44.422 11:51:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.422 11:51:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:44.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.422 --rc genhtml_branch_coverage=1 00:23:44.422 --rc genhtml_function_coverage=1 00:23:44.422 --rc genhtml_legend=1 00:23:44.422 --rc geninfo_all_blocks=1 00:23:44.422 --rc geninfo_unexecuted_blocks=1 00:23:44.422 00:23:44.422 ' 00:23:44.422 11:51:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:44.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.422 --rc genhtml_branch_coverage=1 00:23:44.422 --rc genhtml_function_coverage=1 00:23:44.422 --rc genhtml_legend=1 00:23:44.422 --rc geninfo_all_blocks=1 00:23:44.422 --rc geninfo_unexecuted_blocks=1 00:23:44.422 00:23:44.422 ' 00:23:44.422 11:51:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:44.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.422 --rc genhtml_branch_coverage=1 00:23:44.422 --rc genhtml_function_coverage=1 00:23:44.422 --rc genhtml_legend=1 00:23:44.422 --rc geninfo_all_blocks=1 00:23:44.422 --rc geninfo_unexecuted_blocks=1 00:23:44.422 00:23:44.422 ' 00:23:44.422 11:51:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:44.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.422 --rc genhtml_branch_coverage=1 00:23:44.422 --rc genhtml_function_coverage=1 00:23:44.422 --rc genhtml_legend=1 00:23:44.422 --rc geninfo_all_blocks=1 00:23:44.422 --rc geninfo_unexecuted_blocks=1 00:23:44.422 00:23:44.422 ' 00:23:44.422 11:51:14 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.422 11:51:14 -- nvmf/common.sh@7 -- # uname -s 00:23:44.422 11:51:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.422 11:51:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.422 11:51:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.422 11:51:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.422 11:51:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.422 11:51:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.422 11:51:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.422 11:51:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.422 11:51:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.422 11:51:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.422 11:51:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
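[Editor's note] aer.sh sources nvmf/common.sh the same way multicontroller.sh did; among the defaults it sets, the host NQN is taken directly from nvme-cli, and the UUID portion is reused as the host ID on the next trace line. For reference, the command being captured above is simply the following (illustrative invocation; the output shown is the value from this log):

    # nvme-cli derives the host NQN from the machine's UUID.
    nvme gen-hostnqn
    # nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e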
00:23:44.422 11:51:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:44.422 11:51:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.422 11:51:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.422 11:51:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.422 11:51:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:44.422 11:51:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.422 11:51:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.422 11:51:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.422 11:51:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.422 11:51:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.423 11:51:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.423 11:51:14 -- paths/export.sh@5 -- # export PATH 00:23:44.423 11:51:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.423 11:51:14 -- nvmf/common.sh@46 -- # : 0 00:23:44.423 11:51:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:44.423 11:51:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:44.423 11:51:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:44.423 11:51:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.423 11:51:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.423 11:51:14 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:44.423 11:51:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:44.423 11:51:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:44.423 11:51:14 -- host/aer.sh@11 -- # nvmftestinit 00:23:44.423 11:51:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:44.423 11:51:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.423 11:51:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:44.423 11:51:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:44.423 11:51:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:44.423 11:51:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.423 11:51:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.423 11:51:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.423 11:51:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:44.423 11:51:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:44.423 11:51:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:44.423 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:23:50.983 11:51:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:50.983 11:51:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:50.983 11:51:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:50.983 11:51:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:50.983 11:51:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:50.983 11:51:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:50.983 11:51:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:50.983 11:51:21 -- nvmf/common.sh@294 -- # net_devs=() 00:23:50.983 11:51:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:50.983 11:51:21 -- nvmf/common.sh@295 -- # e810=() 00:23:50.983 11:51:21 -- nvmf/common.sh@295 -- # local -ga e810 00:23:50.983 11:51:21 -- nvmf/common.sh@296 -- # x722=() 00:23:50.983 11:51:21 -- nvmf/common.sh@296 -- # local -ga x722 00:23:50.983 11:51:21 -- nvmf/common.sh@297 -- # mlx=() 00:23:50.983 11:51:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:50.983 11:51:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.983 11:51:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:50.983 11:51:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:50.983 11:51:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:50.983 11:51:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:50.983 11:51:21 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:50.983 11:51:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:50.983 11:51:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:50.983 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:50.983 11:51:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:50.983 11:51:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:50.983 11:51:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:50.983 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:50.983 11:51:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:50.983 11:51:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:50.983 11:51:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:50.983 11:51:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.983 11:51:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:50.983 11:51:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.983 11:51:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:50.983 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:50.983 11:51:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.983 11:51:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:50.983 11:51:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.983 11:51:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:50.983 11:51:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.983 11:51:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:50.983 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:50.983 11:51:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.983 11:51:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:50.983 11:51:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:50.983 11:51:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:50.983 11:51:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:50.983 11:51:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:50.983 11:51:21 -- nvmf/common.sh@57 -- # uname 00:23:50.983 11:51:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:50.983 11:51:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:50.983 11:51:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:50.983 11:51:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:50.983 11:51:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:50.983 11:51:21 -- nvmf/common.sh@65 -- # 
modprobe iw_cm 00:23:50.983 11:51:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:50.983 11:51:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:50.983 11:51:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:50.983 11:51:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:50.983 11:51:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:50.983 11:51:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:50.983 11:51:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:50.983 11:51:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:50.983 11:51:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:50.983 11:51:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:50.983 11:51:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:50.984 11:51:21 -- nvmf/common.sh@104 -- # continue 2 00:23:50.984 11:51:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:50.984 11:51:21 -- nvmf/common.sh@104 -- # continue 2 00:23:50.984 11:51:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:50.984 11:51:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:50.984 11:51:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:50.984 11:51:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:50.984 11:51:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:50.984 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:50.984 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:50.984 altname enp217s0f0np0 00:23:50.984 altname ens818f0np0 00:23:50.984 inet 192.168.100.8/24 scope global mlx_0_0 00:23:50.984 valid_lft forever preferred_lft forever 00:23:50.984 11:51:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:50.984 11:51:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:50.984 11:51:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:50.984 11:51:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:50.984 11:51:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:50.984 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:50.984 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:50.984 altname enp217s0f1np1 00:23:50.984 altname ens818f1np1 00:23:50.984 inet 192.168.100.9/24 scope global mlx_0_1 00:23:50.984 valid_lft 
forever preferred_lft forever 00:23:50.984 11:51:21 -- nvmf/common.sh@410 -- # return 0 00:23:50.984 11:51:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:50.984 11:51:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:50.984 11:51:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:50.984 11:51:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:50.984 11:51:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:50.984 11:51:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:50.984 11:51:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:50.984 11:51:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:50.984 11:51:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:50.984 11:51:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:50.984 11:51:21 -- nvmf/common.sh@104 -- # continue 2 00:23:50.984 11:51:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:50.984 11:51:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:50.984 11:51:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:50.984 11:51:21 -- nvmf/common.sh@104 -- # continue 2 00:23:50.984 11:51:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:50.984 11:51:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:50.984 11:51:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:50.984 11:51:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:50.984 11:51:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:50.984 11:51:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:50.984 11:51:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:50.984 11:51:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:50.984 192.168.100.9' 00:23:50.984 11:51:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:50.984 192.168.100.9' 00:23:50.984 11:51:21 -- nvmf/common.sh@445 -- # head -n 1 00:23:50.984 11:51:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:50.984 11:51:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:50.984 192.168.100.9' 00:23:50.984 11:51:21 -- nvmf/common.sh@446 -- # tail -n +2 00:23:50.984 11:51:21 -- nvmf/common.sh@446 -- # head -n 1 00:23:50.984 11:51:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:50.984 11:51:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:50.984 11:51:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:50.984 11:51:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:50.984 11:51:21 -- 
nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:50.984 11:51:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:51.243 11:51:21 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:51.243 11:51:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:51.243 11:51:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:51.243 11:51:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.243 11:51:21 -- nvmf/common.sh@469 -- # nvmfpid=3832360 00:23:51.243 11:51:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:51.243 11:51:21 -- nvmf/common.sh@470 -- # waitforlisten 3832360 00:23:51.243 11:51:21 -- common/autotest_common.sh@829 -- # '[' -z 3832360 ']' 00:23:51.243 11:51:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.243 11:51:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.243 11:51:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.243 11:51:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.243 11:51:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.243 [2024-12-03 11:51:21.666912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:51.243 [2024-12-03 11:51:21.666971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.243 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.243 [2024-12-03 11:51:21.736284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.243 [2024-12-03 11:51:21.806154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:51.243 [2024-12-03 11:51:21.806268] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.243 [2024-12-03 11:51:21.806278] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.243 [2024-12-03 11:51:21.806286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
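Note: the trace above shows nvmf/common.sh deriving the per-port RDMA addresses by parsing "ip -o -4 addr show" for each mlx interface and collecting them into RDMA_IP_LIST. A minimal standalone sketch of that parsing, assuming the same mlx_0_0/mlx_0_1 interface names seen in this run (the real helpers also handle rxe soft-RoCE devices and address allocation), would look like:

#!/usr/bin/env bash
# Sketch only: condensed from the get_ip_address / RDMA_IP_LIST steps traced above.
get_ip_address() {
    local interface=$1
    # First IPv4 address on the interface, with the /prefix stripped
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips=()
for nic in mlx_0_0 mlx_0_1; do            # net devices found under the mlx5 PCI functions
    rdma_ips+=("$(get_ip_address "$nic")")
done

NVMF_FIRST_TARGET_IP=${rdma_ips[0]}       # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=${rdma_ips[1]}      # 192.168.100.9 in this run
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"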
00:23:51.243 [2024-12-03 11:51:21.806340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.243 [2024-12-03 11:51:21.806438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.243 [2024-12-03 11:51:21.806523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.243 [2024-12-03 11:51:21.806525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.178 11:51:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.178 11:51:22 -- common/autotest_common.sh@862 -- # return 0 00:23:52.178 11:51:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:52.178 11:51:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:52.178 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.178 11:51:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.178 11:51:22 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:52.178 11:51:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.178 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.178 [2024-12-03 11:51:22.564674] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1754090/0x1758580) succeed. 00:23:52.178 [2024-12-03 11:51:22.573763] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1755680/0x1799c20) succeed. 00:23:52.178 11:51:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.178 11:51:22 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:52.178 11:51:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.178 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.178 Malloc0 00:23:52.178 11:51:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.178 11:51:22 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:52.178 11:51:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.178 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.178 11:51:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.178 11:51:22 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.179 11:51:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.179 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.179 11:51:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.179 11:51:22 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:52.179 11:51:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.179 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.179 [2024-12-03 11:51:22.742920] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:52.179 11:51:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.179 11:51:22 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:52.179 11:51:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.179 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.179 [2024-12-03 11:51:22.750540] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:52.179 [ 00:23:52.179 { 00:23:52.179 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:52.179 "subtype": 
"Discovery", 00:23:52.179 "listen_addresses": [], 00:23:52.179 "allow_any_host": true, 00:23:52.179 "hosts": [] 00:23:52.179 }, 00:23:52.179 { 00:23:52.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.179 "subtype": "NVMe", 00:23:52.179 "listen_addresses": [ 00:23:52.179 { 00:23:52.179 "transport": "RDMA", 00:23:52.179 "trtype": "RDMA", 00:23:52.179 "adrfam": "IPv4", 00:23:52.179 "traddr": "192.168.100.8", 00:23:52.179 "trsvcid": "4420" 00:23:52.179 } 00:23:52.179 ], 00:23:52.179 "allow_any_host": true, 00:23:52.179 "hosts": [], 00:23:52.179 "serial_number": "SPDK00000000000001", 00:23:52.179 "model_number": "SPDK bdev Controller", 00:23:52.179 "max_namespaces": 2, 00:23:52.179 "min_cntlid": 1, 00:23:52.179 "max_cntlid": 65519, 00:23:52.179 "namespaces": [ 00:23:52.179 { 00:23:52.179 "nsid": 1, 00:23:52.179 "bdev_name": "Malloc0", 00:23:52.179 "name": "Malloc0", 00:23:52.179 "nguid": "B8CBAA1BEA8241269C616BBFE9C35FBC", 00:23:52.179 "uuid": "b8cbaa1b-ea82-4126-9c61-6bbfe9c35fbc" 00:23:52.179 } 00:23:52.179 ] 00:23:52.179 } 00:23:52.179 ] 00:23:52.179 11:51:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.179 11:51:22 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:52.179 11:51:22 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:52.179 11:51:22 -- host/aer.sh@33 -- # aerpid=3832649 00:23:52.179 11:51:22 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:52.179 11:51:22 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:52.179 11:51:22 -- common/autotest_common.sh@1254 -- # local i=0 00:23:52.179 11:51:22 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:52.179 11:51:22 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:23:52.179 11:51:22 -- common/autotest_common.sh@1257 -- # i=1 00:23:52.179 11:51:22 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:23:52.437 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.437 11:51:22 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:52.437 11:51:22 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:23:52.437 11:51:22 -- common/autotest_common.sh@1257 -- # i=2 00:23:52.437 11:51:22 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:23:52.437 11:51:22 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:52.437 11:51:22 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:52.437 11:51:22 -- common/autotest_common.sh@1265 -- # return 0 00:23:52.437 11:51:22 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:52.437 11:51:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.437 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.437 Malloc1 00:23:52.437 11:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.437 11:51:23 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:52.437 11:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.437 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:52.437 11:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.437 11:51:23 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:52.437 11:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.437 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:52.437 [ 00:23:52.437 { 00:23:52.437 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:52.437 "subtype": "Discovery", 00:23:52.437 "listen_addresses": [], 00:23:52.437 "allow_any_host": true, 00:23:52.437 "hosts": [] 00:23:52.437 }, 00:23:52.437 { 00:23:52.437 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.437 "subtype": "NVMe", 00:23:52.437 "listen_addresses": [ 00:23:52.437 { 00:23:52.437 "transport": "RDMA", 00:23:52.437 "trtype": "RDMA", 00:23:52.437 "adrfam": "IPv4", 00:23:52.437 "traddr": "192.168.100.8", 00:23:52.437 "trsvcid": "4420" 00:23:52.437 } 00:23:52.437 ], 00:23:52.437 "allow_any_host": true, 00:23:52.437 "hosts": [], 00:23:52.437 "serial_number": "SPDK00000000000001", 00:23:52.437 "model_number": "SPDK bdev Controller", 00:23:52.437 "max_namespaces": 2, 00:23:52.437 "min_cntlid": 1, 00:23:52.437 "max_cntlid": 65519, 00:23:52.437 "namespaces": [ 00:23:52.437 { 00:23:52.437 "nsid": 1, 00:23:52.437 "bdev_name": "Malloc0", 00:23:52.437 "name": "Malloc0", 00:23:52.437 "nguid": "B8CBAA1BEA8241269C616BBFE9C35FBC", 00:23:52.437 "uuid": "b8cbaa1b-ea82-4126-9c61-6bbfe9c35fbc" 00:23:52.437 }, 00:23:52.437 { 00:23:52.437 "nsid": 2, 00:23:52.437 "bdev_name": "Malloc1", 00:23:52.437 "name": "Malloc1", 00:23:52.437 "nguid": "E68F6A5F89AC49FE9EE4774C697E1853", 00:23:52.437 "uuid": "e68f6a5f-89ac-49fe-9ee4-774c697e1853" 00:23:52.437 } 00:23:52.696 ] 00:23:52.696 } 00:23:52.696 ] 00:23:52.696 11:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.696 11:51:23 -- host/aer.sh@43 -- # wait 3832649 00:23:52.696 Asynchronous Event Request test 00:23:52.696 Attaching to 192.168.100.8 00:23:52.696 Attached to 192.168.100.8 00:23:52.696 Registering asynchronous event callbacks... 00:23:52.696 Starting namespace attribute notice tests for all controllers... 00:23:52.696 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:52.696 aer_cb - Changed Namespace 00:23:52.696 Cleaning up... 
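Note: the host/aer.sh trace above boils down to a create-transport / malloc-bdev / subsystem / listener sequence, then launching the aer tool and hot-adding a second namespace so the tool receives a namespace-attribute-changed AER (the "aer_cb - Changed Namespace" line). A hedged standalone sketch of that flow using scripts/rpc.py from an SPDK checkout; the relative paths are assumptions based on this run's workspace layout, not the test script itself:

#!/usr/bin/env bash
# Sketch of the AER test flow seen above.
set -e
RPC=./scripts/rpc.py
IP=192.168.100.8
NQN=nqn.2016-06.io.spdk:cnode1
TOUCH=/tmp/aer_touch_file

$RPC nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a "$IP" -s 4420

rm -f "$TOUCH"
# The aer tool connects, registers its AER callback, then touches $TOUCH.
./test/nvme/aer/aer -r "trtype:rdma adrfam:IPv4 traddr:$IP trsvcid:4420 subnqn:$NQN" \
    -n 2 -t "$TOUCH" &
aerpid=$!
while [ ! -e "$TOUCH" ]; do sleep 0.1; done

# Hot-add a second namespace; this is what triggers the aer_cb notice in the log.
$RPC bdev_malloc_create 64 4096 --name Malloc1
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 2
wait "$aerpid"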
00:23:52.696 11:51:23 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:52.696 11:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.696 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:52.696 11:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.696 11:51:23 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:52.696 11:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.696 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:52.696 11:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.696 11:51:23 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.696 11:51:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.696 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:52.696 11:51:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.696 11:51:23 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:52.696 11:51:23 -- host/aer.sh@51 -- # nvmftestfini 00:23:52.696 11:51:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:52.696 11:51:23 -- nvmf/common.sh@116 -- # sync 00:23:52.696 11:51:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:52.696 11:51:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:52.696 11:51:23 -- nvmf/common.sh@119 -- # set +e 00:23:52.696 11:51:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:52.696 11:51:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:52.696 rmmod nvme_rdma 00:23:52.696 rmmod nvme_fabrics 00:23:52.696 11:51:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:52.696 11:51:23 -- nvmf/common.sh@123 -- # set -e 00:23:52.696 11:51:23 -- nvmf/common.sh@124 -- # return 0 00:23:52.696 11:51:23 -- nvmf/common.sh@477 -- # '[' -n 3832360 ']' 00:23:52.696 11:51:23 -- nvmf/common.sh@478 -- # killprocess 3832360 00:23:52.696 11:51:23 -- common/autotest_common.sh@936 -- # '[' -z 3832360 ']' 00:23:52.696 11:51:23 -- common/autotest_common.sh@940 -- # kill -0 3832360 00:23:52.696 11:51:23 -- common/autotest_common.sh@941 -- # uname 00:23:52.696 11:51:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:52.696 11:51:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3832360 00:23:52.696 11:51:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:52.696 11:51:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:52.696 11:51:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3832360' 00:23:52.696 killing process with pid 3832360 00:23:52.696 11:51:23 -- common/autotest_common.sh@955 -- # kill 3832360 00:23:52.696 [2024-12-03 11:51:23.255061] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:52.696 11:51:23 -- common/autotest_common.sh@960 -- # wait 3832360 00:23:52.954 11:51:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:52.954 11:51:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:52.954 00:23:52.954 real 0m8.855s 00:23:52.954 user 0m8.661s 00:23:52.954 sys 0m5.636s 00:23:52.954 11:51:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:52.954 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:52.954 ************************************ 00:23:52.954 END TEST nvmf_aer 00:23:52.954 ************************************ 00:23:53.212 11:51:23 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:53.212 11:51:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:53.212 11:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:53.212 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:53.212 ************************************ 00:23:53.212 START TEST nvmf_async_init 00:23:53.212 ************************************ 00:23:53.212 11:51:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:53.212 * Looking for test storage... 00:23:53.212 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:53.212 11:51:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:53.212 11:51:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:53.212 11:51:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:53.212 11:51:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:53.212 11:51:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:53.212 11:51:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:53.212 11:51:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:53.212 11:51:23 -- scripts/common.sh@335 -- # IFS=.-: 00:23:53.212 11:51:23 -- scripts/common.sh@335 -- # read -ra ver1 00:23:53.212 11:51:23 -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.212 11:51:23 -- scripts/common.sh@336 -- # read -ra ver2 00:23:53.212 11:51:23 -- scripts/common.sh@337 -- # local 'op=<' 00:23:53.212 11:51:23 -- scripts/common.sh@339 -- # ver1_l=2 00:23:53.212 11:51:23 -- scripts/common.sh@340 -- # ver2_l=1 00:23:53.212 11:51:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:53.212 11:51:23 -- scripts/common.sh@343 -- # case "$op" in 00:23:53.212 11:51:23 -- scripts/common.sh@344 -- # : 1 00:23:53.212 11:51:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:53.212 11:51:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.212 11:51:23 -- scripts/common.sh@364 -- # decimal 1 00:23:53.212 11:51:23 -- scripts/common.sh@352 -- # local d=1 00:23:53.213 11:51:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.213 11:51:23 -- scripts/common.sh@354 -- # echo 1 00:23:53.213 11:51:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:53.213 11:51:23 -- scripts/common.sh@365 -- # decimal 2 00:23:53.213 11:51:23 -- scripts/common.sh@352 -- # local d=2 00:23:53.213 11:51:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.213 11:51:23 -- scripts/common.sh@354 -- # echo 2 00:23:53.213 11:51:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:53.213 11:51:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:53.213 11:51:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:53.213 11:51:23 -- scripts/common.sh@367 -- # return 0 00:23:53.213 11:51:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.213 11:51:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:53.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.213 --rc genhtml_branch_coverage=1 00:23:53.213 --rc genhtml_function_coverage=1 00:23:53.213 --rc genhtml_legend=1 00:23:53.213 --rc geninfo_all_blocks=1 00:23:53.213 --rc geninfo_unexecuted_blocks=1 00:23:53.213 00:23:53.213 ' 00:23:53.213 11:51:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:53.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.213 --rc genhtml_branch_coverage=1 00:23:53.213 --rc genhtml_function_coverage=1 00:23:53.213 --rc genhtml_legend=1 00:23:53.213 --rc geninfo_all_blocks=1 00:23:53.213 --rc geninfo_unexecuted_blocks=1 00:23:53.213 00:23:53.213 ' 00:23:53.213 11:51:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:53.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.213 --rc genhtml_branch_coverage=1 00:23:53.213 --rc genhtml_function_coverage=1 00:23:53.213 --rc genhtml_legend=1 00:23:53.213 --rc geninfo_all_blocks=1 00:23:53.213 --rc geninfo_unexecuted_blocks=1 00:23:53.213 00:23:53.213 ' 00:23:53.213 11:51:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:53.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.213 --rc genhtml_branch_coverage=1 00:23:53.213 --rc genhtml_function_coverage=1 00:23:53.213 --rc genhtml_legend=1 00:23:53.213 --rc geninfo_all_blocks=1 00:23:53.213 --rc geninfo_unexecuted_blocks=1 00:23:53.213 00:23:53.213 ' 00:23:53.213 11:51:23 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.213 11:51:23 -- nvmf/common.sh@7 -- # uname -s 00:23:53.213 11:51:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.213 11:51:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.213 11:51:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.213 11:51:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.213 11:51:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.213 11:51:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.213 11:51:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.213 11:51:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.213 11:51:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.213 11:51:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.213 11:51:23 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:53.213 11:51:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:53.213 11:51:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.213 11:51:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.213 11:51:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.213 11:51:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:53.213 11:51:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.213 11:51:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.213 11:51:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.213 11:51:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.213 11:51:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.213 11:51:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.213 11:51:23 -- paths/export.sh@5 -- # export PATH 00:23:53.213 11:51:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.213 11:51:23 -- nvmf/common.sh@46 -- # : 0 00:23:53.213 11:51:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:53.213 11:51:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:53.213 11:51:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:53.213 11:51:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.213 11:51:23 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.213 11:51:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:53.213 11:51:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:53.213 11:51:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:53.213 11:51:23 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:53.213 11:51:23 -- host/async_init.sh@14 -- # null_block_size=512 00:23:53.213 11:51:23 -- host/async_init.sh@15 -- # null_bdev=null0 00:23:53.213 11:51:23 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:53.213 11:51:23 -- host/async_init.sh@20 -- # uuidgen 00:23:53.213 11:51:23 -- host/async_init.sh@20 -- # tr -d - 00:23:53.213 11:51:23 -- host/async_init.sh@20 -- # nguid=2e95116dc2394f2fba6b91b70d54200e 00:23:53.213 11:51:23 -- host/async_init.sh@22 -- # nvmftestinit 00:23:53.213 11:51:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:53.213 11:51:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.213 11:51:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:53.213 11:51:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:53.213 11:51:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:53.213 11:51:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.213 11:51:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.213 11:51:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.213 11:51:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:53.213 11:51:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:53.213 11:51:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:53.213 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:23:59.767 11:51:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:59.767 11:51:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:59.767 11:51:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:59.767 11:51:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:59.767 11:51:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:59.767 11:51:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:59.767 11:51:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:59.767 11:51:30 -- nvmf/common.sh@294 -- # net_devs=() 00:23:59.767 11:51:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:59.767 11:51:30 -- nvmf/common.sh@295 -- # e810=() 00:23:59.767 11:51:30 -- nvmf/common.sh@295 -- # local -ga e810 00:23:59.767 11:51:30 -- nvmf/common.sh@296 -- # x722=() 00:23:59.767 11:51:30 -- nvmf/common.sh@296 -- # local -ga x722 00:23:59.767 11:51:30 -- nvmf/common.sh@297 -- # mlx=() 00:23:59.767 11:51:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:59.767 11:51:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.767 11:51:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:59.767 11:51:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:59.767 11:51:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:59.767 11:51:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:59.767 11:51:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:59.767 11:51:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:59.767 11:51:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:59.767 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:59.767 11:51:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:59.767 11:51:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:59.767 11:51:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:59.767 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:59.767 11:51:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:59.767 11:51:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:59.767 11:51:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:59.767 11:51:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:59.767 11:51:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.767 11:51:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:59.767 11:51:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.767 11:51:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:59.767 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:59.767 11:51:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.767 11:51:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:59.768 11:51:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.768 11:51:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:59.768 11:51:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.768 11:51:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:59.768 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:59.768 11:51:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.768 11:51:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:59.768 11:51:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:59.768 11:51:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:59.768 11:51:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:59.768 11:51:30 -- nvmf/common.sh@57 -- # uname 00:23:59.768 11:51:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:59.768 11:51:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:59.768 11:51:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:59.768 11:51:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:59.768 11:51:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:59.768 11:51:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:59.768 11:51:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:59.768 11:51:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:59.768 11:51:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:59.768 11:51:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:59.768 11:51:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:59.768 11:51:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:59.768 11:51:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:59.768 11:51:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:59.768 11:51:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:59.768 11:51:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:59.768 11:51:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:59.768 11:51:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.768 11:51:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:59.768 11:51:30 -- nvmf/common.sh@104 -- # continue 2 00:23:59.768 11:51:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:59.768 11:51:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.768 11:51:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.768 11:51:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:59.768 11:51:30 -- nvmf/common.sh@104 -- # continue 2 00:23:59.768 11:51:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:59.768 11:51:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:59.768 11:51:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:59.768 11:51:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:59.768 11:51:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:59.768 11:51:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:59.768 11:51:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:59.768 11:51:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:59.768 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:59.768 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:59.768 altname enp217s0f0np0 00:23:59.768 altname ens818f0np0 00:23:59.768 inet 192.168.100.8/24 scope global mlx_0_0 00:23:59.768 valid_lft forever preferred_lft forever 00:23:59.768 11:51:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:59.768 11:51:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:59.768 11:51:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:59.768 11:51:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:59.768 11:51:30 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:23:59.768 11:51:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:59.768 11:51:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:59.768 11:51:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:59.768 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:59.768 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:59.768 altname enp217s0f1np1 00:23:59.768 altname ens818f1np1 00:23:59.768 inet 192.168.100.9/24 scope global mlx_0_1 00:23:59.768 valid_lft forever preferred_lft forever 00:23:59.768 11:51:30 -- nvmf/common.sh@410 -- # return 0 00:23:59.768 11:51:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:59.768 11:51:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:59.768 11:51:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:59.768 11:51:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:00.027 11:51:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:00.027 11:51:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:00.027 11:51:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:00.027 11:51:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:00.027 11:51:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:00.027 11:51:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:00.027 11:51:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:00.027 11:51:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.027 11:51:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:00.027 11:51:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:00.027 11:51:30 -- nvmf/common.sh@104 -- # continue 2 00:24:00.027 11:51:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:00.027 11:51:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.027 11:51:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:00.027 11:51:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.027 11:51:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:00.027 11:51:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:00.027 11:51:30 -- nvmf/common.sh@104 -- # continue 2 00:24:00.027 11:51:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:00.027 11:51:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:00.027 11:51:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:00.027 11:51:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:00.027 11:51:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:00.027 11:51:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:00.027 11:51:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:00.027 11:51:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:00.027 11:51:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:00.027 11:51:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:00.027 11:51:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:00.027 11:51:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:00.027 11:51:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:00.027 192.168.100.9' 00:24:00.027 11:51:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:00.027 192.168.100.9' 00:24:00.027 11:51:30 -- nvmf/common.sh@445 -- # head -n 1 00:24:00.027 11:51:30 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:00.027 11:51:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:00.027 192.168.100.9' 00:24:00.027 11:51:30 -- nvmf/common.sh@446 -- # tail -n +2 00:24:00.027 11:51:30 -- nvmf/common.sh@446 -- # head -n 1 00:24:00.027 11:51:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:00.027 11:51:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:00.027 11:51:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:00.027 11:51:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:00.027 11:51:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:00.027 11:51:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:00.027 11:51:30 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:00.027 11:51:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:00.027 11:51:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.027 11:51:30 -- common/autotest_common.sh@10 -- # set +x 00:24:00.027 11:51:30 -- nvmf/common.sh@469 -- # nvmfpid=3836035 00:24:00.027 11:51:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:00.027 11:51:30 -- nvmf/common.sh@470 -- # waitforlisten 3836035 00:24:00.027 11:51:30 -- common/autotest_common.sh@829 -- # '[' -z 3836035 ']' 00:24:00.027 11:51:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.027 11:51:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.028 11:51:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.028 11:51:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.028 11:51:30 -- common/autotest_common.sh@10 -- # set +x 00:24:00.028 [2024-12-03 11:51:30.543902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:00.028 [2024-12-03 11:51:30.543952] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.028 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.028 [2024-12-03 11:51:30.614792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.286 [2024-12-03 11:51:30.689665] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:00.286 [2024-12-03 11:51:30.689774] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.286 [2024-12-03 11:51:30.689785] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.286 [2024-12-03 11:51:30.689794] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
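Note: nvmfappstart above launches build/bin/nvmf_tgt on a single core (-m 0x1) and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-wait pattern; the rpc_get_methods polling loop and the retry cap are assumptions, the real helper lives in test/common/autotest_common.sh:

#!/usr/bin/env bash
# Sketch of the nvmfappstart / waitforlisten pattern; simplified, not the real helper.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# Poll the RPC socket until the target is ready, bailing out if it died.
for _ in $(seq 1 100); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt listening, pid $nvmfpid"
        exit 0
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "timed out waiting for nvmf_tgt" >&2
exit 1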
00:24:00.286 [2024-12-03 11:51:30.689817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.854 11:51:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.854 11:51:31 -- common/autotest_common.sh@862 -- # return 0 00:24:00.854 11:51:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:00.854 11:51:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.854 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:00.854 11:51:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.854 11:51:31 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:00.854 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.854 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:00.854 [2024-12-03 11:51:31.430606] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x233bf30/0x2340420) succeed. 00:24:00.854 [2024-12-03 11:51:31.439574] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x233d430/0x2381ac0) succeed. 00:24:01.112 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.112 11:51:31 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:01.112 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.112 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.112 null0 00:24:01.112 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.112 11:51:31 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:01.112 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.112 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.112 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.112 11:51:31 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:01.112 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.112 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.113 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.113 11:51:31 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2e95116dc2394f2fba6b91b70d54200e 00:24:01.113 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.113 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.113 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.113 11:51:31 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:01.113 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.113 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.113 [2024-12-03 11:51:31.531407] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:01.113 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.113 11:51:31 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:01.113 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.113 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.113 nvme0n1 00:24:01.113 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.113 11:51:31 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:01.113 11:51:31 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.113 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.113 [ 00:24:01.113 { 00:24:01.113 "name": "nvme0n1", 00:24:01.113 "aliases": [ 00:24:01.113 "2e95116d-c239-4f2f-ba6b-91b70d54200e" 00:24:01.113 ], 00:24:01.113 "product_name": "NVMe disk", 00:24:01.113 "block_size": 512, 00:24:01.113 "num_blocks": 2097152, 00:24:01.113 "uuid": "2e95116d-c239-4f2f-ba6b-91b70d54200e", 00:24:01.113 "assigned_rate_limits": { 00:24:01.113 "rw_ios_per_sec": 0, 00:24:01.113 "rw_mbytes_per_sec": 0, 00:24:01.113 "r_mbytes_per_sec": 0, 00:24:01.113 "w_mbytes_per_sec": 0 00:24:01.113 }, 00:24:01.113 "claimed": false, 00:24:01.113 "zoned": false, 00:24:01.113 "supported_io_types": { 00:24:01.113 "read": true, 00:24:01.113 "write": true, 00:24:01.113 "unmap": false, 00:24:01.113 "write_zeroes": true, 00:24:01.113 "flush": true, 00:24:01.113 "reset": true, 00:24:01.113 "compare": true, 00:24:01.113 "compare_and_write": true, 00:24:01.113 "abort": true, 00:24:01.113 "nvme_admin": true, 00:24:01.113 "nvme_io": true 00:24:01.113 }, 00:24:01.113 "memory_domains": [ 00:24:01.113 { 00:24:01.113 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:01.113 "dma_device_type": 0 00:24:01.113 } 00:24:01.113 ], 00:24:01.113 "driver_specific": { 00:24:01.113 "nvme": [ 00:24:01.113 { 00:24:01.113 "trid": { 00:24:01.113 "trtype": "RDMA", 00:24:01.113 "adrfam": "IPv4", 00:24:01.113 "traddr": "192.168.100.8", 00:24:01.113 "trsvcid": "4420", 00:24:01.113 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:01.113 }, 00:24:01.113 "ctrlr_data": { 00:24:01.113 "cntlid": 1, 00:24:01.113 "vendor_id": "0x8086", 00:24:01.113 "model_number": "SPDK bdev Controller", 00:24:01.113 "serial_number": "00000000000000000000", 00:24:01.113 "firmware_revision": "24.01.1", 00:24:01.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.113 "oacs": { 00:24:01.113 "security": 0, 00:24:01.113 "format": 0, 00:24:01.113 "firmware": 0, 00:24:01.113 "ns_manage": 0 00:24:01.113 }, 00:24:01.113 "multi_ctrlr": true, 00:24:01.113 "ana_reporting": false 00:24:01.113 }, 00:24:01.113 "vs": { 00:24:01.113 "nvme_version": "1.3" 00:24:01.113 }, 00:24:01.113 "ns_data": { 00:24:01.113 "id": 1, 00:24:01.113 "can_share": true 00:24:01.113 } 00:24:01.113 } 00:24:01.113 ], 00:24:01.113 "mp_policy": "active_passive" 00:24:01.113 } 00:24:01.113 } 00:24:01.113 ] 00:24:01.113 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.113 11:51:31 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:01.113 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.113 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.113 [2024-12-03 11:51:31.636549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:01.113 [2024-12-03 11:51:31.657181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.113 [2024-12-03 11:51:31.682034] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
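Note: the async_init steps above export a 1 GiB null bdev with a pinned namespace GUID, attach it back through the RDMA initiator path as nvme0, confirm the GUID surfaces as the bdev uuid, and reset the controller (cntlid moves from 1 to 2 between the two bdev_get_bdevs dumps). A condensed sketch of the same sequence via scripts/rpc.py, mirroring the rpc_cmd calls in the trace:

#!/usr/bin/env bash
# Sketch of the host/async_init.sh happy path traced above.
set -e
RPC=./scripts/rpc.py
IP=192.168.100.8
NQN=nqn.2016-06.io.spdk:cnode0
NGUID=2e95116dc2394f2fba6b91b70d54200e

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
$RPC bdev_null_create null0 1024 512                  # 1024 MiB backing, 512 B blocks -> 2097152 blocks
$RPC nvmf_create_subsystem "$NQN" -a
$RPC nvmf_subsystem_add_ns "$NQN" null0 -g "$NGUID"   # pin the namespace GUID
$RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a "$IP" -s 4420

# Initiator side: attach the exported namespace back as a local bdev.
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -a "$IP" -f ipv4 -s 4420 -n "$NQN"
$RPC bdev_get_bdevs -b nvme0n1                        # uuid should read 2e95116d-c239-4f2f-...
$RPC bdev_nvme_reset_controller nvme0                 # reconnects with a new admin queue and cntlid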
00:24:01.113 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.113 11:51:31 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:01.113 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.113 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.113 [ 00:24:01.113 { 00:24:01.113 "name": "nvme0n1", 00:24:01.113 "aliases": [ 00:24:01.113 "2e95116d-c239-4f2f-ba6b-91b70d54200e" 00:24:01.113 ], 00:24:01.113 "product_name": "NVMe disk", 00:24:01.113 "block_size": 512, 00:24:01.113 "num_blocks": 2097152, 00:24:01.113 "uuid": "2e95116d-c239-4f2f-ba6b-91b70d54200e", 00:24:01.113 "assigned_rate_limits": { 00:24:01.113 "rw_ios_per_sec": 0, 00:24:01.113 "rw_mbytes_per_sec": 0, 00:24:01.113 "r_mbytes_per_sec": 0, 00:24:01.113 "w_mbytes_per_sec": 0 00:24:01.113 }, 00:24:01.113 "claimed": false, 00:24:01.113 "zoned": false, 00:24:01.113 "supported_io_types": { 00:24:01.113 "read": true, 00:24:01.113 "write": true, 00:24:01.113 "unmap": false, 00:24:01.113 "write_zeroes": true, 00:24:01.113 "flush": true, 00:24:01.113 "reset": true, 00:24:01.113 "compare": true, 00:24:01.113 "compare_and_write": true, 00:24:01.113 "abort": true, 00:24:01.113 "nvme_admin": true, 00:24:01.113 "nvme_io": true 00:24:01.113 }, 00:24:01.113 "memory_domains": [ 00:24:01.113 { 00:24:01.113 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:01.113 "dma_device_type": 0 00:24:01.113 } 00:24:01.113 ], 00:24:01.113 "driver_specific": { 00:24:01.113 "nvme": [ 00:24:01.113 { 00:24:01.113 "trid": { 00:24:01.113 "trtype": "RDMA", 00:24:01.113 "adrfam": "IPv4", 00:24:01.113 "traddr": "192.168.100.8", 00:24:01.113 "trsvcid": "4420", 00:24:01.113 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:01.113 }, 00:24:01.113 "ctrlr_data": { 00:24:01.113 "cntlid": 2, 00:24:01.113 "vendor_id": "0x8086", 00:24:01.113 "model_number": "SPDK bdev Controller", 00:24:01.113 "serial_number": "00000000000000000000", 00:24:01.113 "firmware_revision": "24.01.1", 00:24:01.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.113 "oacs": { 00:24:01.113 "security": 0, 00:24:01.113 "format": 0, 00:24:01.113 "firmware": 0, 00:24:01.113 "ns_manage": 0 00:24:01.113 }, 00:24:01.113 "multi_ctrlr": true, 00:24:01.113 "ana_reporting": false 00:24:01.113 }, 00:24:01.113 "vs": { 00:24:01.113 "nvme_version": "1.3" 00:24:01.113 }, 00:24:01.113 "ns_data": { 00:24:01.113 "id": 1, 00:24:01.113 "can_share": true 00:24:01.113 } 00:24:01.113 } 00:24:01.113 ], 00:24:01.113 "mp_policy": "active_passive" 00:24:01.113 } 00:24:01.113 } 00:24:01.113 ] 00:24:01.113 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.113 11:51:31 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.113 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.113 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.372 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.372 11:51:31 -- host/async_init.sh@53 -- # mktemp 00:24:01.372 11:51:31 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.tIA5qKL1Ld 00:24:01.372 11:51:31 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:01.372 11:51:31 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.tIA5qKL1Ld 00:24:01.372 11:51:31 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:01.372 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.372 11:51:31 -- common/autotest_common.sh@10 -- # set +x 
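Note: the trace now moves into the TLS part of async_init: a PSK interchange key is written to a mode-0600 temp file, arbitrary hosts are disallowed, and the next entries add a --secure-channel listener on port 4421 and re-attach the controller with the matching host NQN and key. A hedged sketch of that sequence; the key below is the sample interchange key from this run, not a secret:

#!/usr/bin/env bash
# Sketch of the PSK / secure-channel steps exercised in the trace.
set -e
RPC=./scripts/rpc.py
IP=192.168.100.8
NQN=nqn.2016-06.io.spdk:cnode0
HOSTNQN=nqn.2016-06.io.spdk:host1

key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

$RPC nvmf_subsystem_allow_any_host "$NQN" --disable
$RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a "$IP" -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --psk "$key_path"

# Initiator side: present the same host NQN and PSK when attaching.
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -a "$IP" -f ipv4 -s 4421 \
    -n "$NQN" -q "$HOSTNQN" --psk "$key_path"
rm -f "$key_path"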
00:24:01.372 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.372 11:51:31 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:24:01.372 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.372 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.372 [2024-12-03 11:51:31.765643] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:01.372 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.372 11:51:31 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tIA5qKL1Ld 00:24:01.372 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.372 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.372 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.372 11:51:31 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tIA5qKL1Ld 00:24:01.372 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.372 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.372 [2024-12-03 11:51:31.781669] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.372 nvme0n1 00:24:01.372 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.372 11:51:31 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:01.372 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.372 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.372 [ 00:24:01.372 { 00:24:01.372 "name": "nvme0n1", 00:24:01.372 "aliases": [ 00:24:01.372 "2e95116d-c239-4f2f-ba6b-91b70d54200e" 00:24:01.372 ], 00:24:01.372 "product_name": "NVMe disk", 00:24:01.372 "block_size": 512, 00:24:01.372 "num_blocks": 2097152, 00:24:01.372 "uuid": "2e95116d-c239-4f2f-ba6b-91b70d54200e", 00:24:01.372 "assigned_rate_limits": { 00:24:01.372 "rw_ios_per_sec": 0, 00:24:01.372 "rw_mbytes_per_sec": 0, 00:24:01.372 "r_mbytes_per_sec": 0, 00:24:01.372 "w_mbytes_per_sec": 0 00:24:01.372 }, 00:24:01.372 "claimed": false, 00:24:01.372 "zoned": false, 00:24:01.372 "supported_io_types": { 00:24:01.372 "read": true, 00:24:01.372 "write": true, 00:24:01.372 "unmap": false, 00:24:01.372 "write_zeroes": true, 00:24:01.372 "flush": true, 00:24:01.372 "reset": true, 00:24:01.372 "compare": true, 00:24:01.372 "compare_and_write": true, 00:24:01.372 "abort": true, 00:24:01.372 "nvme_admin": true, 00:24:01.372 "nvme_io": true 00:24:01.372 }, 00:24:01.372 "memory_domains": [ 00:24:01.372 { 00:24:01.372 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:01.372 "dma_device_type": 0 00:24:01.372 } 00:24:01.372 ], 00:24:01.372 "driver_specific": { 00:24:01.372 "nvme": [ 00:24:01.372 { 00:24:01.372 "trid": { 00:24:01.372 "trtype": "RDMA", 00:24:01.372 "adrfam": "IPv4", 00:24:01.372 "traddr": "192.168.100.8", 00:24:01.372 "trsvcid": "4421", 00:24:01.372 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:01.372 }, 00:24:01.372 "ctrlr_data": { 00:24:01.372 "cntlid": 3, 00:24:01.372 "vendor_id": "0x8086", 00:24:01.372 "model_number": "SPDK bdev Controller", 00:24:01.372 "serial_number": "00000000000000000000", 00:24:01.372 "firmware_revision": "24.01.1", 00:24:01.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.372 
"oacs": { 00:24:01.372 "security": 0, 00:24:01.372 "format": 0, 00:24:01.372 "firmware": 0, 00:24:01.372 "ns_manage": 0 00:24:01.372 }, 00:24:01.372 "multi_ctrlr": true, 00:24:01.372 "ana_reporting": false 00:24:01.372 }, 00:24:01.372 "vs": { 00:24:01.372 "nvme_version": "1.3" 00:24:01.372 }, 00:24:01.372 "ns_data": { 00:24:01.372 "id": 1, 00:24:01.372 "can_share": true 00:24:01.372 } 00:24:01.372 } 00:24:01.372 ], 00:24:01.372 "mp_policy": "active_passive" 00:24:01.372 } 00:24:01.372 } 00:24:01.372 ] 00:24:01.372 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.372 11:51:31 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.372 11:51:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.372 11:51:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.372 11:51:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.372 11:51:31 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.tIA5qKL1Ld 00:24:01.372 11:51:31 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:01.372 11:51:31 -- host/async_init.sh@78 -- # nvmftestfini 00:24:01.372 11:51:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:01.372 11:51:31 -- nvmf/common.sh@116 -- # sync 00:24:01.372 11:51:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:01.372 11:51:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:01.372 11:51:31 -- nvmf/common.sh@119 -- # set +e 00:24:01.372 11:51:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:01.372 11:51:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:01.372 rmmod nvme_rdma 00:24:01.372 rmmod nvme_fabrics 00:24:01.372 11:51:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:01.373 11:51:31 -- nvmf/common.sh@123 -- # set -e 00:24:01.373 11:51:31 -- nvmf/common.sh@124 -- # return 0 00:24:01.373 11:51:31 -- nvmf/common.sh@477 -- # '[' -n 3836035 ']' 00:24:01.373 11:51:31 -- nvmf/common.sh@478 -- # killprocess 3836035 00:24:01.373 11:51:31 -- common/autotest_common.sh@936 -- # '[' -z 3836035 ']' 00:24:01.373 11:51:31 -- common/autotest_common.sh@940 -- # kill -0 3836035 00:24:01.373 11:51:31 -- common/autotest_common.sh@941 -- # uname 00:24:01.737 11:51:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:01.737 11:51:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3836035 00:24:01.737 11:51:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:01.737 11:51:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:01.737 11:51:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3836035' 00:24:01.737 killing process with pid 3836035 00:24:01.737 11:51:32 -- common/autotest_common.sh@955 -- # kill 3836035 00:24:01.737 11:51:32 -- common/autotest_common.sh@960 -- # wait 3836035 00:24:01.737 11:51:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:01.737 11:51:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:01.737 00:24:01.737 real 0m8.713s 00:24:01.737 user 0m3.937s 00:24:01.737 sys 0m5.538s 00:24:01.737 11:51:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:01.737 11:51:32 -- common/autotest_common.sh@10 -- # set +x 00:24:01.737 ************************************ 00:24:01.737 END TEST nvmf_async_init 00:24:01.737 ************************************ 00:24:02.021 11:51:32 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:02.021 11:51:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:02.021 
11:51:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.021 11:51:32 -- common/autotest_common.sh@10 -- # set +x 00:24:02.021 ************************************ 00:24:02.021 START TEST dma 00:24:02.021 ************************************ 00:24:02.021 11:51:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:02.021 * Looking for test storage... 00:24:02.021 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:02.021 11:51:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:02.021 11:51:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:02.021 11:51:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:02.021 11:51:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:02.021 11:51:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:02.021 11:51:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:02.021 11:51:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:02.021 11:51:32 -- scripts/common.sh@335 -- # IFS=.-: 00:24:02.021 11:51:32 -- scripts/common.sh@335 -- # read -ra ver1 00:24:02.021 11:51:32 -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.021 11:51:32 -- scripts/common.sh@336 -- # read -ra ver2 00:24:02.021 11:51:32 -- scripts/common.sh@337 -- # local 'op=<' 00:24:02.021 11:51:32 -- scripts/common.sh@339 -- # ver1_l=2 00:24:02.021 11:51:32 -- scripts/common.sh@340 -- # ver2_l=1 00:24:02.021 11:51:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:02.021 11:51:32 -- scripts/common.sh@343 -- # case "$op" in 00:24:02.021 11:51:32 -- scripts/common.sh@344 -- # : 1 00:24:02.021 11:51:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:02.021 11:51:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.021 11:51:32 -- scripts/common.sh@364 -- # decimal 1 00:24:02.021 11:51:32 -- scripts/common.sh@352 -- # local d=1 00:24:02.021 11:51:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.021 11:51:32 -- scripts/common.sh@354 -- # echo 1 00:24:02.021 11:51:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:02.021 11:51:32 -- scripts/common.sh@365 -- # decimal 2 00:24:02.021 11:51:32 -- scripts/common.sh@352 -- # local d=2 00:24:02.021 11:51:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.021 11:51:32 -- scripts/common.sh@354 -- # echo 2 00:24:02.021 11:51:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:02.021 11:51:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.021 11:51:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.021 11:51:32 -- scripts/common.sh@367 -- # return 0 00:24:02.022 11:51:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.022 11:51:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:02.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.022 --rc genhtml_branch_coverage=1 00:24:02.022 --rc genhtml_function_coverage=1 00:24:02.022 --rc genhtml_legend=1 00:24:02.022 --rc geninfo_all_blocks=1 00:24:02.022 --rc geninfo_unexecuted_blocks=1 00:24:02.022 00:24:02.022 ' 00:24:02.022 11:51:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:02.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.022 --rc genhtml_branch_coverage=1 00:24:02.022 --rc genhtml_function_coverage=1 00:24:02.022 --rc genhtml_legend=1 00:24:02.022 --rc geninfo_all_blocks=1 00:24:02.022 --rc geninfo_unexecuted_blocks=1 00:24:02.022 00:24:02.022 ' 00:24:02.022 11:51:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:02.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.022 --rc genhtml_branch_coverage=1 00:24:02.022 --rc genhtml_function_coverage=1 00:24:02.022 --rc genhtml_legend=1 00:24:02.022 --rc geninfo_all_blocks=1 00:24:02.022 --rc geninfo_unexecuted_blocks=1 00:24:02.022 00:24:02.022 ' 00:24:02.022 11:51:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:02.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.022 --rc genhtml_branch_coverage=1 00:24:02.022 --rc genhtml_function_coverage=1 00:24:02.022 --rc genhtml_legend=1 00:24:02.022 --rc geninfo_all_blocks=1 00:24:02.022 --rc geninfo_unexecuted_blocks=1 00:24:02.022 00:24:02.022 ' 00:24:02.022 11:51:32 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.022 11:51:32 -- nvmf/common.sh@7 -- # uname -s 00:24:02.022 11:51:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.022 11:51:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.022 11:51:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.022 11:51:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.022 11:51:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.022 11:51:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.022 11:51:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.022 11:51:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.022 11:51:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.022 11:51:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.022 11:51:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:24:02.022 11:51:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:02.022 11:51:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.022 11:51:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.022 11:51:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.022 11:51:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.022 11:51:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.022 11:51:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.022 11:51:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.022 11:51:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.022 11:51:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.022 11:51:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.022 11:51:32 -- paths/export.sh@5 -- # export PATH 00:24:02.022 11:51:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.022 11:51:32 -- nvmf/common.sh@46 -- # : 0 00:24:02.022 11:51:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.022 11:51:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.022 11:51:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.022 11:51:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.022 11:51:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.022 11:51:32 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.022 11:51:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.022 11:51:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.022 11:51:32 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:24:02.022 11:51:32 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:24:02.022 11:51:32 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:24:02.022 11:51:32 -- host/dma.sh@18 -- # subsystem=0 00:24:02.022 11:51:32 -- host/dma.sh@93 -- # nvmftestinit 00:24:02.022 11:51:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:02.022 11:51:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.022 11:51:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.022 11:51:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.022 11:51:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.022 11:51:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.022 11:51:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.022 11:51:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.022 11:51:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:02.022 11:51:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:02.022 11:51:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:02.022 11:51:32 -- common/autotest_common.sh@10 -- # set +x 00:24:10.138 11:51:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:10.138 11:51:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:10.138 11:51:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:10.138 11:51:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:10.138 11:51:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:10.138 11:51:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:10.138 11:51:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:10.138 11:51:39 -- nvmf/common.sh@294 -- # net_devs=() 00:24:10.138 11:51:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:10.138 11:51:39 -- nvmf/common.sh@295 -- # e810=() 00:24:10.138 11:51:39 -- nvmf/common.sh@295 -- # local -ga e810 00:24:10.138 11:51:39 -- nvmf/common.sh@296 -- # x722=() 00:24:10.138 11:51:39 -- nvmf/common.sh@296 -- # local -ga x722 00:24:10.138 11:51:39 -- nvmf/common.sh@297 -- # mlx=() 00:24:10.138 11:51:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:10.138 11:51:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.138 11:51:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:10.138 11:51:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:10.138 11:51:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:10.138 11:51:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:10.138 11:51:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:10.138 11:51:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.138 11:51:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:10.138 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:10.138 11:51:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.138 11:51:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.138 11:51:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:10.138 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:10.138 11:51:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.138 11:51:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:10.138 11:51:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.138 11:51:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.138 11:51:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.138 11:51:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.138 11:51:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:10.138 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:10.138 11:51:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.138 11:51:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.138 11:51:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.138 11:51:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.138 11:51:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.138 11:51:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:10.138 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:10.138 11:51:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.138 11:51:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:10.138 11:51:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:10.138 11:51:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:10.138 11:51:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:10.138 11:51:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:10.139 11:51:39 -- nvmf/common.sh@57 -- # uname 00:24:10.139 11:51:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:10.139 11:51:39 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:10.139 11:51:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:10.139 11:51:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:10.139 11:51:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:10.139 11:51:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:10.139 11:51:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:10.139 11:51:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:10.139 11:51:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:10.139 11:51:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:10.139 11:51:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:10.139 11:51:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.139 11:51:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:10.139 11:51:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:10.139 11:51:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.139 11:51:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:10.139 11:51:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:10.139 11:51:39 -- nvmf/common.sh@104 -- # continue 2 00:24:10.139 11:51:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:10.139 11:51:39 -- nvmf/common.sh@104 -- # continue 2 00:24:10.139 11:51:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:10.139 11:51:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:10.139 11:51:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.139 11:51:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:10.139 11:51:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:10.139 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.139 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:10.139 altname enp217s0f0np0 00:24:10.139 altname ens818f0np0 00:24:10.139 inet 192.168.100.8/24 scope global mlx_0_0 00:24:10.139 valid_lft forever preferred_lft forever 00:24:10.139 11:51:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:10.139 11:51:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:10.139 11:51:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.139 11:51:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:10.139 11:51:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:10.139 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.139 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:10.139 altname enp217s0f1np1 00:24:10.139 altname ens818f1np1 00:24:10.139 inet 192.168.100.9/24 scope global mlx_0_1 00:24:10.139 valid_lft forever preferred_lft forever 00:24:10.139 11:51:39 -- nvmf/common.sh@410 -- # return 0 00:24:10.139 11:51:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:10.139 11:51:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:10.139 11:51:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:10.139 11:51:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:10.139 11:51:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.139 11:51:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:10.139 11:51:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:10.139 11:51:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.139 11:51:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:10.139 11:51:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:10.139 11:51:39 -- nvmf/common.sh@104 -- # continue 2 00:24:10.139 11:51:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.139 11:51:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.139 11:51:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:10.139 11:51:39 -- nvmf/common.sh@104 -- # continue 2 00:24:10.139 11:51:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:10.139 11:51:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:10.139 11:51:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.139 11:51:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:10.139 11:51:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:10.139 11:51:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.139 11:51:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.139 11:51:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:10.139 192.168.100.9' 00:24:10.139 11:51:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:10.139 192.168.100.9' 00:24:10.139 11:51:39 -- nvmf/common.sh@445 -- # head -n 1 00:24:10.139 11:51:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:10.139 11:51:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:10.139 192.168.100.9' 00:24:10.139 11:51:39 -- nvmf/common.sh@446 -- # tail -n +2 00:24:10.139 11:51:39 -- nvmf/common.sh@446 -- # head -n 1 00:24:10.139 11:51:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:10.139 11:51:39 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:10.139 11:51:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:10.139 11:51:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:10.139 11:51:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:10.139 11:51:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:10.139 11:51:39 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:10.139 11:51:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:10.139 11:51:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.140 11:51:39 -- common/autotest_common.sh@10 -- # set +x 00:24:10.140 11:51:39 -- nvmf/common.sh@469 -- # nvmfpid=3839571 00:24:10.140 11:51:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:10.140 11:51:39 -- nvmf/common.sh@470 -- # waitforlisten 3839571 00:24:10.140 11:51:39 -- common/autotest_common.sh@829 -- # '[' -z 3839571 ']' 00:24:10.140 11:51:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.140 11:51:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.140 11:51:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.140 11:51:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.140 11:51:39 -- common/autotest_common.sh@10 -- # set +x 00:24:10.140 [2024-12-03 11:51:39.526492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:10.140 [2024-12-03 11:51:39.526537] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.140 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.140 [2024-12-03 11:51:39.593317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:10.140 [2024-12-03 11:51:39.667556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:10.140 [2024-12-03 11:51:39.667666] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.140 [2024-12-03 11:51:39.667677] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.140 [2024-12-03 11:51:39.667686] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
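The target for the dma tests is started the same way the other host-side suites do it: launch nvmf_tgt with a small core mask and wait for its RPC socket before issuing any configuration. A sketch of that startup as it appears in this workspace, assuming the waitforlisten helper from autotest_common.sh:

    # -m 0x3 pins the target to cores 0-1; the dma clients later run on 0xc (cores 2-3)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # waitforlisten polls /var/tmp/spdk.sock until the app starts answering RPCs
    waitforlisten "$nvmfpid"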
00:24:10.140 [2024-12-03 11:51:39.667735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.140 [2024-12-03 11:51:39.667738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.140 11:51:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.140 11:51:40 -- common/autotest_common.sh@862 -- # return 0 00:24:10.140 11:51:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:10.140 11:51:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.140 11:51:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.140 11:51:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.140 11:51:40 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:10.140 11:51:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.140 11:51:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.140 [2024-12-03 11:51:40.405719] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23fea60/0x2402f50) succeed. 00:24:10.140 [2024-12-03 11:51:40.414838] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23fff60/0x24445f0) succeed. 00:24:10.140 11:51:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.140 11:51:40 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:10.140 11:51:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.140 11:51:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.140 Malloc0 00:24:10.140 11:51:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.140 11:51:40 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:10.140 11:51:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.140 11:51:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.140 11:51:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.140 11:51:40 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:10.140 11:51:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.140 11:51:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.140 11:51:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.140 11:51:40 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:10.140 11:51:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.140 11:51:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.140 [2024-12-03 11:51:40.559755] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:10.140 11:51:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.140 11:51:40 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:10.140 11:51:40 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:10.140 11:51:40 -- nvmf/common.sh@520 -- # config=() 00:24:10.140 11:51:40 -- nvmf/common.sh@520 -- # local subsystem config 00:24:10.140 11:51:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:10.140 11:51:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:10.140 { 00:24:10.140 "params": { 00:24:10.140 "name": "Nvme$subsystem", 00:24:10.140 "trtype": "$TEST_TRANSPORT", 00:24:10.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.140 "adrfam": 
"ipv4", 00:24:10.140 "trsvcid": "$NVMF_PORT", 00:24:10.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.140 "hdgst": ${hdgst:-false}, 00:24:10.140 "ddgst": ${ddgst:-false} 00:24:10.140 }, 00:24:10.140 "method": "bdev_nvme_attach_controller" 00:24:10.140 } 00:24:10.140 EOF 00:24:10.140 )") 00:24:10.140 11:51:40 -- nvmf/common.sh@542 -- # cat 00:24:10.140 11:51:40 -- nvmf/common.sh@544 -- # jq . 00:24:10.140 11:51:40 -- nvmf/common.sh@545 -- # IFS=, 00:24:10.140 11:51:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:10.140 "params": { 00:24:10.140 "name": "Nvme0", 00:24:10.140 "trtype": "rdma", 00:24:10.140 "traddr": "192.168.100.8", 00:24:10.140 "adrfam": "ipv4", 00:24:10.140 "trsvcid": "4420", 00:24:10.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.140 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:10.140 "hdgst": false, 00:24:10.140 "ddgst": false 00:24:10.140 }, 00:24:10.140 "method": "bdev_nvme_attach_controller" 00:24:10.140 }' 00:24:10.140 [2024-12-03 11:51:40.605909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:10.140 [2024-12-03 11:51:40.605955] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839855 ] 00:24:10.140 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.140 [2024-12-03 11:51:40.670062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:10.140 [2024-12-03 11:51:40.737944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.140 [2024-12-03 11:51:40.737947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.686 bdev Nvme0n1 reports 1 memory domains 00:24:16.686 bdev Nvme0n1 supports RDMA memory domain 00:24:16.686 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:16.686 ========================================================================== 00:24:16.686 Latency [us] 00:24:16.686 IOPS MiB/s Average min max 00:24:16.687 Core 2: 22242.20 86.88 718.61 237.67 8605.80 00:24:16.687 Core 3: 22415.15 87.56 713.07 230.25 8639.03 00:24:16.687 ========================================================================== 00:24:16.687 Total : 44657.35 174.44 715.83 230.25 8639.03 00:24:16.687 00:24:16.687 Total operations: 223349, translate 223349 pull_push 0 memzero 0 00:24:16.687 11:51:46 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:16.687 11:51:46 -- host/dma.sh@107 -- # gen_malloc_json 00:24:16.687 11:51:46 -- host/dma.sh@21 -- # jq . 00:24:16.687 [2024-12-03 11:51:46.195658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:16.687 [2024-12-03 11:51:46.195713] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840726 ] 00:24:16.687 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.687 [2024-12-03 11:51:46.262086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:16.687 [2024-12-03 11:51:46.325875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.687 [2024-12-03 11:51:46.325877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.944 bdev Malloc0 reports 1 memory domains 00:24:21.944 bdev Malloc0 doesn't support RDMA memory domain 00:24:21.944 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:21.944 ========================================================================== 00:24:21.944 Latency [us] 00:24:21.944 IOPS MiB/s Average min max 00:24:21.944 Core 2: 15044.27 58.77 1062.81 381.83 2381.16 00:24:21.944 Core 3: 15201.82 59.38 1051.77 412.93 1967.11 00:24:21.944 ========================================================================== 00:24:21.944 Total : 30246.09 118.15 1057.26 381.83 2381.16 00:24:21.944 00:24:21.944 Total operations: 151280, translate 0 pull_push 605120 memzero 0 00:24:21.944 11:51:51 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:21.944 11:51:51 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:21.944 11:51:51 -- host/dma.sh@48 -- # local subsystem=0 00:24:21.944 11:51:51 -- host/dma.sh@50 -- # jq . 00:24:21.944 Ignoring -M option 00:24:21.944 [2024-12-03 11:51:51.689722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:21.944 [2024-12-03 11:51:51.689777] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841750 ] 00:24:21.944 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.944 [2024-12-03 11:51:51.754359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:21.944 [2024-12-03 11:51:51.817470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.944 [2024-12-03 11:51:51.817472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.944 [2024-12-03 11:51:52.028620] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:27.204 [2024-12-03 11:51:57.058033] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:27.204 bdev c2574acf-4e8a-47bc-b2c7-27d5aef4c0a3 reports 1 memory domains 00:24:27.204 bdev c2574acf-4e8a-47bc-b2c7-27d5aef4c0a3 supports RDMA memory domain 00:24:27.204 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:27.204 ========================================================================== 00:24:27.204 Latency [us] 00:24:27.204 IOPS MiB/s Average min max 00:24:27.204 Core 2: 74379.81 290.55 214.29 73.01 1584.42 00:24:27.204 Core 3: 71164.60 277.99 223.95 57.66 1497.66 00:24:27.204 ========================================================================== 00:24:27.204 Total : 145544.42 568.53 219.01 57.66 1584.42 00:24:27.204 00:24:27.204 Total operations: 727810, translate 0 pull_push 0 memzero 727810 00:24:27.204 11:51:57 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:27.204 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.204 [2024-12-03 11:51:57.383080] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:29.101 Initializing NVMe Controllers 00:24:29.101 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:24:29.101 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:29.101 Initialization complete. Launching workers. 00:24:29.101 ======================================================== 00:24:29.101 Latency(us) 00:24:29.101 Device Information : IOPS MiB/s Average min max 00:24:29.101 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2008.89 7.85 7964.26 6982.26 8005.42 00:24:29.101 ======================================================== 00:24:29.101 Total : 2008.89 7.85 7964.26 6982.26 8005.42 00:24:29.101 00:24:29.101 11:51:59 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:24:29.101 11:51:59 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:24:29.101 11:51:59 -- host/dma.sh@48 -- # local subsystem=0 00:24:29.101 11:51:59 -- host/dma.sh@50 -- # jq . 
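Between the memzero case and the final lvol translate case, the trace also drives a one-second spdk_nvme_perf write pass straight against the RDMA listener, which is why the discovery-subsystem warning appears above. The invocation, as captured:

    build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'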
00:24:29.101 [2024-12-03 11:51:59.711000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:29.101 [2024-12-03 11:51:59.711055] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843102 ] 00:24:29.359 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.359 [2024-12-03 11:51:59.775720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:29.359 [2024-12-03 11:51:59.843678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.359 [2024-12-03 11:51:59.843681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.616 [2024-12-03 11:52:00.046251] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:34.873 [2024-12-03 11:52:05.077186] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:34.873 bdev 01f110ba-6830-4efb-a39e-2ca1ca42b930 reports 1 memory domains 00:24:34.873 bdev 01f110ba-6830-4efb-a39e-2ca1ca42b930 supports RDMA memory domain 00:24:34.873 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:34.873 ========================================================================== 00:24:34.873 Latency [us] 00:24:34.873 IOPS MiB/s Average min max 00:24:34.873 Core 2: 19309.32 75.43 827.87 14.97 11411.36 00:24:34.873 Core 3: 19704.04 76.97 811.30 14.82 11712.61 00:24:34.873 ========================================================================== 00:24:34.873 Total : 39013.36 152.40 819.50 14.82 11712.61 00:24:34.873 00:24:34.873 Total operations: 195106, translate 194998 pull_push 0 memzero 108 00:24:34.873 11:52:05 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:34.873 11:52:05 -- host/dma.sh@120 -- # nvmftestfini 00:24:34.873 11:52:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:34.873 11:52:05 -- nvmf/common.sh@116 -- # sync 00:24:34.873 11:52:05 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:34.873 11:52:05 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:34.873 11:52:05 -- nvmf/common.sh@119 -- # set +e 00:24:34.873 11:52:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:34.873 11:52:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:34.873 rmmod nvme_rdma 00:24:34.873 rmmod nvme_fabrics 00:24:34.873 11:52:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:34.873 11:52:05 -- nvmf/common.sh@123 -- # set -e 00:24:34.873 11:52:05 -- nvmf/common.sh@124 -- # return 0 00:24:34.873 11:52:05 -- nvmf/common.sh@477 -- # '[' -n 3839571 ']' 00:24:34.873 11:52:05 -- nvmf/common.sh@478 -- # killprocess 3839571 00:24:34.873 11:52:05 -- common/autotest_common.sh@936 -- # '[' -z 3839571 ']' 00:24:34.873 11:52:05 -- common/autotest_common.sh@940 -- # kill -0 3839571 00:24:34.873 11:52:05 -- common/autotest_common.sh@941 -- # uname 00:24:34.873 11:52:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:34.873 11:52:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3839571 00:24:34.873 11:52:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:34.873 11:52:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:34.873 11:52:05 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 3839571' 00:24:34.873 killing process with pid 3839571 00:24:34.873 11:52:05 -- common/autotest_common.sh@955 -- # kill 3839571 00:24:34.873 11:52:05 -- common/autotest_common.sh@960 -- # wait 3839571 00:24:35.438 11:52:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:35.438 11:52:05 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:35.438 00:24:35.438 real 0m33.412s 00:24:35.438 user 1m36.855s 00:24:35.438 sys 0m6.617s 00:24:35.438 11:52:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:35.438 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.438 ************************************ 00:24:35.438 END TEST dma 00:24:35.438 ************************************ 00:24:35.438 11:52:05 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:35.438 11:52:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:35.438 11:52:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:35.438 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.438 ************************************ 00:24:35.438 START TEST nvmf_identify 00:24:35.438 ************************************ 00:24:35.438 11:52:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:35.438 * Looking for test storage... 00:24:35.438 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:35.438 11:52:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:35.438 11:52:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:35.438 11:52:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:35.438 11:52:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:35.438 11:52:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:35.438 11:52:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:35.438 11:52:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:35.438 11:52:05 -- scripts/common.sh@335 -- # IFS=.-: 00:24:35.438 11:52:05 -- scripts/common.sh@335 -- # read -ra ver1 00:24:35.438 11:52:05 -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.438 11:52:05 -- scripts/common.sh@336 -- # read -ra ver2 00:24:35.438 11:52:05 -- scripts/common.sh@337 -- # local 'op=<' 00:24:35.438 11:52:05 -- scripts/common.sh@339 -- # ver1_l=2 00:24:35.438 11:52:05 -- scripts/common.sh@340 -- # ver2_l=1 00:24:35.438 11:52:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:35.438 11:52:05 -- scripts/common.sh@343 -- # case "$op" in 00:24:35.438 11:52:05 -- scripts/common.sh@344 -- # : 1 00:24:35.438 11:52:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:35.438 11:52:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.438 11:52:05 -- scripts/common.sh@364 -- # decimal 1 00:24:35.438 11:52:05 -- scripts/common.sh@352 -- # local d=1 00:24:35.438 11:52:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.438 11:52:05 -- scripts/common.sh@354 -- # echo 1 00:24:35.438 11:52:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:35.438 11:52:05 -- scripts/common.sh@365 -- # decimal 2 00:24:35.438 11:52:05 -- scripts/common.sh@352 -- # local d=2 00:24:35.438 11:52:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.438 11:52:05 -- scripts/common.sh@354 -- # echo 2 00:24:35.438 11:52:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:35.438 11:52:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:35.438 11:52:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:35.438 11:52:05 -- scripts/common.sh@367 -- # return 0 00:24:35.438 11:52:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.438 11:52:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:35.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.438 --rc genhtml_branch_coverage=1 00:24:35.438 --rc genhtml_function_coverage=1 00:24:35.438 --rc genhtml_legend=1 00:24:35.438 --rc geninfo_all_blocks=1 00:24:35.438 --rc geninfo_unexecuted_blocks=1 00:24:35.438 00:24:35.438 ' 00:24:35.438 11:52:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:35.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.438 --rc genhtml_branch_coverage=1 00:24:35.438 --rc genhtml_function_coverage=1 00:24:35.438 --rc genhtml_legend=1 00:24:35.438 --rc geninfo_all_blocks=1 00:24:35.438 --rc geninfo_unexecuted_blocks=1 00:24:35.438 00:24:35.438 ' 00:24:35.438 11:52:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:35.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.438 --rc genhtml_branch_coverage=1 00:24:35.438 --rc genhtml_function_coverage=1 00:24:35.438 --rc genhtml_legend=1 00:24:35.438 --rc geninfo_all_blocks=1 00:24:35.438 --rc geninfo_unexecuted_blocks=1 00:24:35.438 00:24:35.438 ' 00:24:35.438 11:52:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:35.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.438 --rc genhtml_branch_coverage=1 00:24:35.438 --rc genhtml_function_coverage=1 00:24:35.438 --rc genhtml_legend=1 00:24:35.438 --rc geninfo_all_blocks=1 00:24:35.438 --rc geninfo_unexecuted_blocks=1 00:24:35.438 00:24:35.438 ' 00:24:35.438 11:52:05 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.438 11:52:05 -- nvmf/common.sh@7 -- # uname -s 00:24:35.438 11:52:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.439 11:52:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.439 11:52:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.439 11:52:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.439 11:52:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.439 11:52:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.439 11:52:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.439 11:52:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.439 11:52:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.439 11:52:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.439 11:52:05 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:35.439 11:52:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:35.439 11:52:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.439 11:52:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.439 11:52:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.439 11:52:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:35.439 11:52:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.439 11:52:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.439 11:52:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.439 11:52:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.439 11:52:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.439 11:52:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.439 11:52:05 -- paths/export.sh@5 -- # export PATH 00:24:35.439 11:52:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.439 11:52:05 -- nvmf/common.sh@46 -- # : 0 00:24:35.439 11:52:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:35.439 11:52:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:35.439 11:52:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:35.439 11:52:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.439 11:52:05 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.439 11:52:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:35.439 11:52:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:35.439 11:52:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:35.439 11:52:06 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.439 11:52:06 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.439 11:52:06 -- host/identify.sh@14 -- # nvmftestinit 00:24:35.439 11:52:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:35.439 11:52:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.439 11:52:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:35.439 11:52:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:35.439 11:52:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:35.439 11:52:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.439 11:52:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.439 11:52:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.439 11:52:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:35.439 11:52:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:35.439 11:52:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:35.439 11:52:06 -- common/autotest_common.sh@10 -- # set +x 00:24:43.547 11:52:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:43.547 11:52:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:43.547 11:52:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:43.547 11:52:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:43.547 11:52:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:43.547 11:52:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:43.547 11:52:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:43.547 11:52:12 -- nvmf/common.sh@294 -- # net_devs=() 00:24:43.547 11:52:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:43.547 11:52:12 -- nvmf/common.sh@295 -- # e810=() 00:24:43.547 11:52:12 -- nvmf/common.sh@295 -- # local -ga e810 00:24:43.547 11:52:12 -- nvmf/common.sh@296 -- # x722=() 00:24:43.547 11:52:12 -- nvmf/common.sh@296 -- # local -ga x722 00:24:43.547 11:52:12 -- nvmf/common.sh@297 -- # mlx=() 00:24:43.547 11:52:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:43.547 11:52:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.547 11:52:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:43.547 11:52:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:43.547 11:52:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:43.547 
11:52:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:43.547 11:52:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:43.547 11:52:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:43.547 11:52:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:43.547 11:52:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:43.548 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:43.548 11:52:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:43.548 11:52:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:43.548 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:43.548 11:52:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:43.548 11:52:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:43.548 11:52:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.548 11:52:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.548 11:52:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.548 11:52:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:43.548 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.548 11:52:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.548 11:52:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.548 11:52:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.548 11:52:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:43.548 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.548 11:52:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:43.548 11:52:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:43.548 11:52:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:43.548 11:52:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:43.548 11:52:12 -- nvmf/common.sh@57 -- # uname 00:24:43.548 11:52:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:43.548 11:52:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:43.548 
11:52:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:43.548 11:52:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:43.548 11:52:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:43.548 11:52:12 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:43.548 11:52:12 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:43.548 11:52:12 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:43.548 11:52:12 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:43.548 11:52:12 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:43.548 11:52:12 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:43.548 11:52:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:43.548 11:52:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:43.548 11:52:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:43.548 11:52:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:43.548 11:52:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:43.548 11:52:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@104 -- # continue 2 00:24:43.548 11:52:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@104 -- # continue 2 00:24:43.548 11:52:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:43.548 11:52:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:43.548 11:52:12 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:43.548 11:52:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:43.548 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:43.548 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:43.548 altname enp217s0f0np0 00:24:43.548 altname ens818f0np0 00:24:43.548 inet 192.168.100.8/24 scope global mlx_0_0 00:24:43.548 valid_lft forever preferred_lft forever 00:24:43.548 11:52:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:43.548 11:52:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:43.548 11:52:12 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:43.548 11:52:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:43.548 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:24:43.548 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:43.548 altname enp217s0f1np1 00:24:43.548 altname ens818f1np1 00:24:43.548 inet 192.168.100.9/24 scope global mlx_0_1 00:24:43.548 valid_lft forever preferred_lft forever 00:24:43.548 11:52:12 -- nvmf/common.sh@410 -- # return 0 00:24:43.548 11:52:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:43.548 11:52:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:43.548 11:52:12 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:43.548 11:52:12 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:43.548 11:52:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:43.548 11:52:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:43.548 11:52:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:43.548 11:52:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:43.548 11:52:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:43.548 11:52:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@104 -- # continue 2 00:24:43.548 11:52:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:43.548 11:52:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:43.548 11:52:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@104 -- # continue 2 00:24:43.548 11:52:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:43.548 11:52:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:43.548 11:52:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:43.548 11:52:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:43.548 11:52:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:43.548 11:52:12 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:43.548 192.168.100.9' 00:24:43.548 11:52:12 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:43.548 192.168.100.9' 00:24:43.548 11:52:12 -- nvmf/common.sh@445 -- # head -n 1 00:24:43.548 11:52:12 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:43.548 11:52:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:43.548 192.168.100.9' 00:24:43.548 11:52:12 -- nvmf/common.sh@446 -- # tail -n +2 00:24:43.548 11:52:12 -- nvmf/common.sh@446 -- # head -n 1 00:24:43.548 11:52:12 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:43.548 11:52:12 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:24:43.548 11:52:12 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:43.548 11:52:12 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:43.548 11:52:12 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:43.548 11:52:12 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:43.548 11:52:12 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:43.548 11:52:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:43.548 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:24:43.548 11:52:12 -- host/identify.sh@19 -- # nvmfpid=3847358 00:24:43.548 11:52:12 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.548 11:52:12 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.548 11:52:12 -- host/identify.sh@23 -- # waitforlisten 3847358 00:24:43.549 11:52:12 -- common/autotest_common.sh@829 -- # '[' -z 3847358 ']' 00:24:43.549 11:52:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.549 11:52:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.549 11:52:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.549 11:52:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.549 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 [2024-12-03 11:52:12.995777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:43.549 [2024-12-03 11:52:12.995831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.549 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.549 [2024-12-03 11:52:13.066456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.549 [2024-12-03 11:52:13.141527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:43.549 [2024-12-03 11:52:13.141636] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.549 [2024-12-03 11:52:13.141646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.549 [2024-12-03 11:52:13.141656] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
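[editor's note] The trace above (nvmf/common.sh@444-446) derives the two RDMA target addresses by listing the IPv4 address of each Mellanox port and splitting the resulting list. A minimal sketch of that derivation, assuming the ports are already named mlx_0_0/mlx_0_1 and carry the 192.168.100.0/24 test addresses exactly as shown in the trace:

    # Sketch only: mirrors the address-derivation steps traced above,
    # assuming mlx_0_0/mlx_0_1 already hold the 192.168.100.0/24 test IPs.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

With both addresses resolved, the harness loads nvme-rdma and starts nvmf_tgt, whose EAL/reactor startup notices continue below.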
00:24:43.549 [2024-12-03 11:52:13.141701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.549 [2024-12-03 11:52:13.141794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.549 [2024-12-03 11:52:13.141854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.549 [2024-12-03 11:52:13.141856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.549 11:52:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.549 11:52:13 -- common/autotest_common.sh@862 -- # return 0 00:24:43.549 11:52:13 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:43.549 11:52:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.549 11:52:13 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 [2024-12-03 11:52:13.861646] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa4e090/0xa52580) succeed. 00:24:43.549 [2024-12-03 11:52:13.871129] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa4f680/0xa93c20) succeed. 00:24:43.549 11:52:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.549 11:52:13 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:43.549 11:52:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:43.549 11:52:13 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 11:52:14 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:43.549 11:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.549 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 Malloc0 00:24:43.549 11:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.549 11:52:14 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.549 11:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.549 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 11:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.549 11:52:14 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:43.549 11:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.549 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 11:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.549 11:52:14 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:43.549 11:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.549 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 [2024-12-03 11:52:14.084694] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:43.549 11:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.549 11:52:14 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:43.549 11:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.549 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 11:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.549 11:52:14 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:43.549 11:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.549 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:43.549 [2024-12-03 
11:52:14.100365] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:43.549 [ 00:24:43.549 { 00:24:43.549 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:43.549 "subtype": "Discovery", 00:24:43.549 "listen_addresses": [ 00:24:43.549 { 00:24:43.549 "transport": "RDMA", 00:24:43.549 "trtype": "RDMA", 00:24:43.549 "adrfam": "IPv4", 00:24:43.549 "traddr": "192.168.100.8", 00:24:43.549 "trsvcid": "4420" 00:24:43.549 } 00:24:43.549 ], 00:24:43.549 "allow_any_host": true, 00:24:43.549 "hosts": [] 00:24:43.549 }, 00:24:43.549 { 00:24:43.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.549 "subtype": "NVMe", 00:24:43.549 "listen_addresses": [ 00:24:43.549 { 00:24:43.549 "transport": "RDMA", 00:24:43.549 "trtype": "RDMA", 00:24:43.549 "adrfam": "IPv4", 00:24:43.549 "traddr": "192.168.100.8", 00:24:43.549 "trsvcid": "4420" 00:24:43.549 } 00:24:43.549 ], 00:24:43.549 "allow_any_host": true, 00:24:43.549 "hosts": [], 00:24:43.549 "serial_number": "SPDK00000000000001", 00:24:43.549 "model_number": "SPDK bdev Controller", 00:24:43.549 "max_namespaces": 32, 00:24:43.549 "min_cntlid": 1, 00:24:43.549 "max_cntlid": 65519, 00:24:43.549 "namespaces": [ 00:24:43.549 { 00:24:43.549 "nsid": 1, 00:24:43.549 "bdev_name": "Malloc0", 00:24:43.549 "name": "Malloc0", 00:24:43.549 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:43.549 "eui64": "ABCDEF0123456789", 00:24:43.549 "uuid": "fd9e9963-e67b-4549-b0a7-50c329ccfe79" 00:24:43.549 } 00:24:43.549 ] 00:24:43.549 } 00:24:43.549 ] 00:24:43.549 11:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.549 11:52:14 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:43.549 [2024-12-03 11:52:14.145398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:43.549 [2024-12-03 11:52:14.145467] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847646 ] 00:24:43.549 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.814 [2024-12-03 11:52:14.194229] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:43.814 [2024-12-03 11:52:14.194301] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:24:43.814 [2024-12-03 11:52:14.194328] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:24:43.814 [2024-12-03 11:52:14.194333] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:24:43.814 [2024-12-03 11:52:14.194366] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:43.814 [2024-12-03 11:52:14.212624] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
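[editor's note] The rpc_cmd calls traced above configure the running nvmf_tgt over its RPC socket before spdk_nvme_identify is invoked against the discovery service. As a sketch, assuming nvmf_tgt is already listening on the default /var/tmp/spdk.sock, the same target setup can be reproduced with SPDK's scripts/rpc.py using the identical arguments shown in the trace:

    # Sketch: the RPC sequence issued above via rpc_cmd, replayed with scripts/rpc.py.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems   # prints the discovery and cnode1 subsystems listed above

The identify run that follows connects to the discovery controller at 192.168.100.8:4420 and walks the standard fabrics init state machine (read VS/CAP, CC.EN=1, wait for CSTS.RDY, identify, configure AER, keep-alive, discovery log pages).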
00:24:43.814 [2024-12-03 11:52:14.222702] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:43.814 [2024-12-03 11:52:14.222713] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:24:43.814 [2024-12-03 11:52:14.222722] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222729] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222735] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222742] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222748] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222754] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222760] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222766] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222773] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222779] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222785] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222792] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222798] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222804] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222811] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222817] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222823] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222829] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222835] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222842] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222848] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222857] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222863] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 
11:52:14.222869] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222875] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222882] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222888] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222894] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222901] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222907] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222913] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:43.814 [2024-12-03 11:52:14.222919] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:24:43.814 [2024-12-03 11:52:14.222925] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:43.814 [2024-12-03 11:52:14.222930] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:24:43.814 [2024-12-03 11:52:14.222948] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.222960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:24:43.815 [2024-12-03 11:52:14.228116] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.815 [2024-12-03 11:52:14.228125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:43.815 [2024-12-03 11:52:14.228133] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228140] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:43.815 [2024-12-03 11:52:14.228147] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:43.815 [2024-12-03 11:52:14.228154] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:43.815 [2024-12-03 11:52:14.228166] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.815 [2024-12-03 11:52:14.228196] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.815 [2024-12-03 11:52:14.228202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:24:43.815 [2024-12-03 11:52:14.228209] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:43.815 [2024-12-03 11:52:14.228215] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228222] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:43.815 [2024-12-03 11:52:14.228230] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.815 [2024-12-03 11:52:14.228255] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.815 [2024-12-03 11:52:14.228261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:24:43.815 [2024-12-03 11:52:14.228268] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:43.815 [2024-12-03 11:52:14.228275] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228282] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:43.815 [2024-12-03 11:52:14.228289] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.815 [2024-12-03 11:52:14.228317] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.815 [2024-12-03 11:52:14.228323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:43.815 [2024-12-03 11:52:14.228329] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:43.815 [2024-12-03 11:52:14.228336] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228344] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.815 [2024-12-03 11:52:14.228370] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.815 [2024-12-03 11:52:14.228376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:43.815 [2024-12-03 11:52:14.228382] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:43.815 [2024-12-03 11:52:14.228388] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:43.815 [2024-12-03 11:52:14.228394] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228401] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:43.815 [2024-12-03 11:52:14.228508] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:43.815 [2024-12-03 11:52:14.228514] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:43.815 [2024-12-03 11:52:14.228524] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.815 [2024-12-03 11:52:14.228548] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.815 [2024-12-03 11:52:14.228554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:43.815 [2024-12-03 11:52:14.228560] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:43.815 [2024-12-03 11:52:14.228566] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228576] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.815 [2024-12-03 11:52:14.228600] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.815 [2024-12-03 11:52:14.228606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:43.815 [2024-12-03 11:52:14.228612] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:43.815 [2024-12-03 11:52:14.228618] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:43.815 [2024-12-03 11:52:14.228624] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228631] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:43.815 [2024-12-03 11:52:14.228640] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:43.815 [2024-12-03 11:52:14.228650] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:43.815 [2024-12-03 11:52:14.228694] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.815 [2024-12-03 11:52:14.228700] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:43.815 [2024-12-03 11:52:14.228709] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:43.815 [2024-12-03 11:52:14.228715] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:43.815 [2024-12-03 11:52:14.228721] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:43.815 [2024-12-03 11:52:14.228728] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:43.815 [2024-12-03 11:52:14.228734] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:43.815 [2024-12-03 11:52:14.228740] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:43.815 [2024-12-03 11:52:14.228746] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:43.815 [2024-12-03 11:52:14.228756] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:43.815 [2024-12-03 11:52:14.228764] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.816 [2024-12-03 11:52:14.228791] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.816 [2024-12-03 11:52:14.228798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.816 [2024-12-03 11:52:14.228806] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.816 [2024-12-03 11:52:14.228822] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.816 [2024-12-03 11:52:14.228837] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.816 [2024-12-03 11:52:14.228851] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.816 [2024-12-03 11:52:14.228864] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:24:43.816 [2024-12-03 11:52:14.228870] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228881] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:43.816 [2024-12-03 11:52:14.228889] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.816 [2024-12-03 11:52:14.228918] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.816 [2024-12-03 11:52:14.228924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:24:43.816 [2024-12-03 11:52:14.228931] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:43.816 [2024-12-03 11:52:14.228937] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:43.816 [2024-12-03 11:52:14.228943] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228952] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.228960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:43.816 [2024-12-03 11:52:14.228986] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.816 [2024-12-03 11:52:14.228991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:43.816 [2024-12-03 11:52:14.228999] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.229009] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:43.816 [2024-12-03 11:52:14.229030] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.229039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183d00 00:24:43.816 [2024-12-03 11:52:14.229047] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.229055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.816 [2024-12-03 11:52:14.229070] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.816 [2024-12-03 11:52:14.229077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:43.816 [2024-12-03 11:52:14.229089] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.229097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183d00 00:24:43.816 [2024-12-03 11:52:14.229104] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.229121] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.816 [2024-12-03 11:52:14.229126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:43.816 [2024-12-03 11:52:14.229133] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.229139] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.816 [2024-12-03 11:52:14.229145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:43.816 [2024-12-03 11:52:14.229155] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.229163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183d00 00:24:43.816 [2024-12-03 11:52:14.229169] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:43.816 [2024-12-03 11:52:14.229186] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.816 [2024-12-03 11:52:14.229191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:43.816 [2024-12-03 11:52:14.229203] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:43.816 ===================================================== 00:24:43.816 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:43.816 ===================================================== 00:24:43.816 Controller Capabilities/Features 00:24:43.816 ================================ 00:24:43.816 Vendor ID: 0000 00:24:43.816 Subsystem Vendor ID: 0000 00:24:43.816 Serial Number: .................... 00:24:43.816 Model Number: ........................................ 
00:24:43.816 Firmware Version: 24.01.1 00:24:43.816 Recommended Arb Burst: 0 00:24:43.816 IEEE OUI Identifier: 00 00 00 00:24:43.816 Multi-path I/O 00:24:43.816 May have multiple subsystem ports: No 00:24:43.816 May have multiple controllers: No 00:24:43.816 Associated with SR-IOV VF: No 00:24:43.816 Max Data Transfer Size: 131072 00:24:43.816 Max Number of Namespaces: 0 00:24:43.816 Max Number of I/O Queues: 1024 00:24:43.816 NVMe Specification Version (VS): 1.3 00:24:43.816 NVMe Specification Version (Identify): 1.3 00:24:43.816 Maximum Queue Entries: 128 00:24:43.816 Contiguous Queues Required: Yes 00:24:43.816 Arbitration Mechanisms Supported 00:24:43.816 Weighted Round Robin: Not Supported 00:24:43.816 Vendor Specific: Not Supported 00:24:43.816 Reset Timeout: 15000 ms 00:24:43.816 Doorbell Stride: 4 bytes 00:24:43.816 NVM Subsystem Reset: Not Supported 00:24:43.816 Command Sets Supported 00:24:43.816 NVM Command Set: Supported 00:24:43.816 Boot Partition: Not Supported 00:24:43.816 Memory Page Size Minimum: 4096 bytes 00:24:43.816 Memory Page Size Maximum: 4096 bytes 00:24:43.816 Persistent Memory Region: Not Supported 00:24:43.816 Optional Asynchronous Events Supported 00:24:43.816 Namespace Attribute Notices: Not Supported 00:24:43.816 Firmware Activation Notices: Not Supported 00:24:43.816 ANA Change Notices: Not Supported 00:24:43.817 PLE Aggregate Log Change Notices: Not Supported 00:24:43.817 LBA Status Info Alert Notices: Not Supported 00:24:43.817 EGE Aggregate Log Change Notices: Not Supported 00:24:43.817 Normal NVM Subsystem Shutdown event: Not Supported 00:24:43.817 Zone Descriptor Change Notices: Not Supported 00:24:43.817 Discovery Log Change Notices: Supported 00:24:43.817 Controller Attributes 00:24:43.817 128-bit Host Identifier: Not Supported 00:24:43.817 Non-Operational Permissive Mode: Not Supported 00:24:43.817 NVM Sets: Not Supported 00:24:43.817 Read Recovery Levels: Not Supported 00:24:43.817 Endurance Groups: Not Supported 00:24:43.817 Predictable Latency Mode: Not Supported 00:24:43.817 Traffic Based Keep ALive: Not Supported 00:24:43.817 Namespace Granularity: Not Supported 00:24:43.817 SQ Associations: Not Supported 00:24:43.817 UUID List: Not Supported 00:24:43.817 Multi-Domain Subsystem: Not Supported 00:24:43.817 Fixed Capacity Management: Not Supported 00:24:43.817 Variable Capacity Management: Not Supported 00:24:43.817 Delete Endurance Group: Not Supported 00:24:43.817 Delete NVM Set: Not Supported 00:24:43.817 Extended LBA Formats Supported: Not Supported 00:24:43.817 Flexible Data Placement Supported: Not Supported 00:24:43.817 00:24:43.817 Controller Memory Buffer Support 00:24:43.817 ================================ 00:24:43.817 Supported: No 00:24:43.817 00:24:43.817 Persistent Memory Region Support 00:24:43.817 ================================ 00:24:43.817 Supported: No 00:24:43.817 00:24:43.817 Admin Command Set Attributes 00:24:43.817 ============================ 00:24:43.817 Security Send/Receive: Not Supported 00:24:43.817 Format NVM: Not Supported 00:24:43.817 Firmware Activate/Download: Not Supported 00:24:43.817 Namespace Management: Not Supported 00:24:43.817 Device Self-Test: Not Supported 00:24:43.817 Directives: Not Supported 00:24:43.817 NVMe-MI: Not Supported 00:24:43.817 Virtualization Management: Not Supported 00:24:43.817 Doorbell Buffer Config: Not Supported 00:24:43.817 Get LBA Status Capability: Not Supported 00:24:43.817 Command & Feature Lockdown Capability: Not Supported 00:24:43.817 Abort Command Limit: 1 00:24:43.817 
Async Event Request Limit: 4 00:24:43.817 Number of Firmware Slots: N/A 00:24:43.817 Firmware Slot 1 Read-Only: N/A 00:24:43.817 Firmware Activation Without Reset: N/A 00:24:43.817 Multiple Update Detection Support: N/A 00:24:43.817 Firmware Update Granularity: No Information Provided 00:24:43.817 Per-Namespace SMART Log: No 00:24:43.817 Asymmetric Namespace Access Log Page: Not Supported 00:24:43.817 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:43.817 Command Effects Log Page: Not Supported 00:24:43.817 Get Log Page Extended Data: Supported 00:24:43.817 Telemetry Log Pages: Not Supported 00:24:43.817 Persistent Event Log Pages: Not Supported 00:24:43.817 Supported Log Pages Log Page: May Support 00:24:43.817 Commands Supported & Effects Log Page: Not Supported 00:24:43.817 Feature Identifiers & Effects Log Page:May Support 00:24:43.817 NVMe-MI Commands & Effects Log Page: May Support 00:24:43.817 Data Area 4 for Telemetry Log: Not Supported 00:24:43.817 Error Log Page Entries Supported: 128 00:24:43.817 Keep Alive: Not Supported 00:24:43.817 00:24:43.817 NVM Command Set Attributes 00:24:43.817 ========================== 00:24:43.817 Submission Queue Entry Size 00:24:43.817 Max: 1 00:24:43.817 Min: 1 00:24:43.817 Completion Queue Entry Size 00:24:43.817 Max: 1 00:24:43.817 Min: 1 00:24:43.817 Number of Namespaces: 0 00:24:43.817 Compare Command: Not Supported 00:24:43.817 Write Uncorrectable Command: Not Supported 00:24:43.817 Dataset Management Command: Not Supported 00:24:43.817 Write Zeroes Command: Not Supported 00:24:43.817 Set Features Save Field: Not Supported 00:24:43.817 Reservations: Not Supported 00:24:43.817 Timestamp: Not Supported 00:24:43.817 Copy: Not Supported 00:24:43.817 Volatile Write Cache: Not Present 00:24:43.817 Atomic Write Unit (Normal): 1 00:24:43.817 Atomic Write Unit (PFail): 1 00:24:43.817 Atomic Compare & Write Unit: 1 00:24:43.817 Fused Compare & Write: Supported 00:24:43.817 Scatter-Gather List 00:24:43.817 SGL Command Set: Supported 00:24:43.817 SGL Keyed: Supported 00:24:43.817 SGL Bit Bucket Descriptor: Not Supported 00:24:43.817 SGL Metadata Pointer: Not Supported 00:24:43.817 Oversized SGL: Not Supported 00:24:43.817 SGL Metadata Address: Not Supported 00:24:43.817 SGL Offset: Supported 00:24:43.817 Transport SGL Data Block: Not Supported 00:24:43.817 Replay Protected Memory Block: Not Supported 00:24:43.817 00:24:43.817 Firmware Slot Information 00:24:43.817 ========================= 00:24:43.817 Active slot: 0 00:24:43.817 00:24:43.817 00:24:43.817 Error Log 00:24:43.817 ========= 00:24:43.817 00:24:43.817 Active Namespaces 00:24:43.817 ================= 00:24:43.817 Discovery Log Page 00:24:43.817 ================== 00:24:43.817 Generation Counter: 2 00:24:43.817 Number of Records: 2 00:24:43.817 Record Format: 0 00:24:43.817 00:24:43.817 Discovery Log Entry 0 00:24:43.817 ---------------------- 00:24:43.817 Transport Type: 1 (RDMA) 00:24:43.817 Address Family: 1 (IPv4) 00:24:43.817 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:43.817 Entry Flags: 00:24:43.817 Duplicate Returned Information: 1 00:24:43.817 Explicit Persistent Connection Support for Discovery: 1 00:24:43.817 Transport Requirements: 00:24:43.817 Secure Channel: Not Required 00:24:43.817 Port ID: 0 (0x0000) 00:24:43.817 Controller ID: 65535 (0xffff) 00:24:43.817 Admin Max SQ Size: 128 00:24:43.817 Transport Service Identifier: 4420 00:24:43.817 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:43.817 Transport Address: 192.168.100.8 
00:24:43.817 Transport Specific Address Subtype - RDMA 00:24:43.817 RDMA QP Service Type: 1 (Reliable Connected) 00:24:43.817 RDMA Provider Type: 1 (No provider specified) 00:24:43.817 RDMA CM Service: 1 (RDMA_CM) 00:24:43.817 Discovery Log Entry 1 00:24:43.817 ---------------------- 00:24:43.817 Transport Type: 1 (RDMA) 00:24:43.817 Address Family: 1 (IPv4) 00:24:43.817 Subsystem Type: 2 (NVM Subsystem) 00:24:43.818 Entry Flags: 00:24:43.818 Duplicate Returned Information: 0 00:24:43.818 Explicit Persistent Connection Support for Discovery: 0 00:24:43.818 Transport Requirements: 00:24:43.818 Secure Channel: Not Required 00:24:43.818 Port ID: 0 (0x0000) 00:24:43.818 Controller ID: 65535 (0xffff) 00:24:43.818 Admin Max SQ Size: [2024-12-03 11:52:14.229276] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:43.818 [2024-12-03 11:52:14.229286] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43572 doesn't match qid 00:24:43.818 [2024-12-03 11:52:14.229301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32613 cdw0:5 sqhd:9e28 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229308] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43572 doesn't match qid 00:24:43.818 [2024-12-03 11:52:14.229316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32613 cdw0:5 sqhd:9e28 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229323] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43572 doesn't match qid 00:24:43.818 [2024-12-03 11:52:14.229331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32613 cdw0:5 sqhd:9e28 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229338] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43572 doesn't match qid 00:24:43.818 [2024-12-03 11:52:14.229345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32613 cdw0:5 sqhd:9e28 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229354] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229378] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229393] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229409] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229426] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229439] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:43.818 [2024-12-03 11:52:14.229445] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:43.818 [2024-12-03 11:52:14.229451] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229459] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229485] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229498] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229508] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229531] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229544] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229553] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229580] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229593] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229603] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229632] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229644] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229653] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229677] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229691] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229701] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229728] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229743] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229752] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229774] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229786] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229796] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.818 [2024-12-03 11:52:14.229825] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.818 [2024-12-03 11:52:14.229831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:43.818 [2024-12-03 11:52:14.229838] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:43.818 [2024-12-03 11:52:14.229847] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.229855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.229873] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.229879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:24:43.819 [2024-12-03 11:52:14.229885] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.229894] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.229902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.229917] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.229923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.229930] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.229939] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.229947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.229969] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.229975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.229982] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.229990] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.229998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230018] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230031] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230039] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230063] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230076] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230084] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 
11:52:14.230117] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230129] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230138] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230166] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230178] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230187] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230217] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230230] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230239] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230263] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230275] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230284] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230308] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230320] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230329] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230357] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230369] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230378] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230404] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230416] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230425] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230456] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230469] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230477] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.819 [2024-12-03 11:52:14.230485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.819 [2024-12-03 11:52:14.230507] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.819 [2024-12-03 11:52:14.230513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:43.819 [2024-12-03 11:52:14.230519] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230528] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230552] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:43.820 
[2024-12-03 11:52:14.230564] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230573] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230597] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230609] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230618] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230640] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230652] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230661] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230683] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230695] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230704] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230726] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230738] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230747] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230778] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230791] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230799] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230830] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230842] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230851] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230881] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230893] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230902] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230926] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230938] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230947] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.230971] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.230977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.230983] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.230992] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:43.820 [2024-12-03 11:52:14.231000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.820 [2024-12-03 11:52:14.231019] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.820 [2024-12-03 11:52:14.231025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:43.820 [2024-12-03 11:52:14.231031] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231040] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231064] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231076] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231087] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231119] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231131] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231140] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231166] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231178] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231187] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231221] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 
11:52:14.231233] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231241] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231269] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231281] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231290] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231320] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231332] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231341] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231363] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231375] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231386] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231410] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231422] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231431] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231456] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231469] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231477] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231501] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231514] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231523] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231547] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231559] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231568] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231592] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231604] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231613] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231644] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231657] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231666] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231698] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.821 [2024-12-03 11:52:14.231703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:43.821 [2024-12-03 11:52:14.231710] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231719] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.821 [2024-12-03 11:52:14.231727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.821 [2024-12-03 11:52:14.231746] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.231752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 11:52:14.231759] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231767] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.822 [2024-12-03 11:52:14.231791] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.231797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 11:52:14.231803] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231812] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.822 [2024-12-03 11:52:14.231841] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.231847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 11:52:14.231854] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231862] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.822 [2024-12-03 11:52:14.231890] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.231896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 
11:52:14.231903] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231911] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.822 [2024-12-03 11:52:14.231934] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.231939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 11:52:14.231947] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231956] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.231964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.822 [2024-12-03 11:52:14.231988] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.231994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 11:52:14.232000] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.232009] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.232017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.822 [2024-12-03 11:52:14.232033] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.232038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 11:52:14.232045] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.232054] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.232062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.822 [2024-12-03 11:52:14.232081] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.232087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 11:52:14.232094] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.232102] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.236116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.822 [2024-12-03 11:52:14.236137] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.822 [2024-12-03 11:52:14.236143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0 00:24:43.822 [2024-12-03 11:52:14.236149] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.236156] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:43.822 128 00:24:43.822 Transport Service Identifier: 4420 00:24:43.822 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:43.822 Transport Address: 192.168.100.8 00:24:43.822 Transport Specific Address Subtype - RDMA 00:24:43.822 RDMA QP Service Type: 1 (Reliable Connected) 00:24:43.822 RDMA Provider Type: 1 (No provider specified) 00:24:43.822 RDMA CM Service: 1 (RDMA_CM) 00:24:43.822 11:52:14 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:43.822 [2024-12-03 11:52:14.308770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:43.822 [2024-12-03 11:52:14.308809] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847653 ] 00:24:43.822 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.822 [2024-12-03 11:52:14.354770] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:43.822 [2024-12-03 11:52:14.354834] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:24:43.822 [2024-12-03 11:52:14.354857] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:24:43.822 [2024-12-03 11:52:14.354862] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:24:43.822 [2024-12-03 11:52:14.354884] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:43.822 [2024-12-03 11:52:14.374573] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
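The -r argument passed to spdk_nvme_identify above is an SPDK transport ID string pointing at the NVM subsystem the discovery log advertised (nqn.2016-06.io.spdk:cnode1 at 192.168.100.8, RDMA port 4420), and the DEBUG records that follow trace the admin-queue connect plus the CC.EN/CSTS.RDY enable handshake for that controller. As a minimal sketch (not part of the test script; the program name and printed fields are illustrative, while the transport ID string is the same one the log shows being passed via -r), the equivalent connect-and-identify flow through SPDK's public NVMe host API looks roughly like this:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Bring up the SPDK environment (hugepages, PCI access, etc.). */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative name, not from the log */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string the test passes to spdk_nvme_identify -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* spdk_nvme_connect() issues the FABRIC CONNECT and walks the
         * CC.EN / CSTS.RDY controller-enable sequence that the
         * nvme_ctrlr.c state-machine DEBUG messages below record. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Identify Controller data, cached by the driver after init. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID 0x%04x, MDTS %u\n",
               (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The sketch assumes an SPDK build with RDMA transport support on the host running it; under those assumptions it exercises the same connect, identify, and detach path whose per-step debug output continues below.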
00:24:43.822 [2024-12-03 11:52:14.384639] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:43.822 [2024-12-03 11:52:14.384650] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:24:43.822 [2024-12-03 11:52:14.384657] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.384664] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.384670] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.384676] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.384682] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.384688] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.384695] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.384701] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:43.822 [2024-12-03 11:52:14.384707] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384713] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384719] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384725] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384731] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384737] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384743] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384749] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384755] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384761] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384767] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384773] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384779] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384788] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384794] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 
11:52:14.384800] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384806] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384812] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384818] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384824] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384830] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384836] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384842] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384848] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:24:43.823 [2024-12-03 11:52:14.384853] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:43.823 [2024-12-03 11:52:14.384857] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:24:43.823 [2024-12-03 11:52:14.384871] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.384882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:24:43.823 [2024-12-03 11:52:14.390113] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.823 [2024-12-03 11:52:14.390122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:43.823 [2024-12-03 11:52:14.390129] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390136] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:43.823 [2024-12-03 11:52:14.390143] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:43.823 [2024-12-03 11:52:14.390149] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:43.823 [2024-12-03 11:52:14.390160] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.823 [2024-12-03 11:52:14.390189] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.823 [2024-12-03 11:52:14.390194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:24:43.823 [2024-12-03 11:52:14.390201] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:43.823 [2024-12-03 11:52:14.390207] nvme_rdma.c:2425:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390213] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:43.823 [2024-12-03 11:52:14.390221] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.823 [2024-12-03 11:52:14.390247] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.823 [2024-12-03 11:52:14.390253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:24:43.823 [2024-12-03 11:52:14.390259] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:43.823 [2024-12-03 11:52:14.390265] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390272] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:43.823 [2024-12-03 11:52:14.390279] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.823 [2024-12-03 11:52:14.390306] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.823 [2024-12-03 11:52:14.390312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:43.823 [2024-12-03 11:52:14.390318] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:43.823 [2024-12-03 11:52:14.390324] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390332] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.823 [2024-12-03 11:52:14.390356] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.823 [2024-12-03 11:52:14.390362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:43.823 [2024-12-03 11:52:14.390368] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:43.823 [2024-12-03 11:52:14.390373] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:43.823 [2024-12-03 11:52:14.390379] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390386] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:24:43.823 [2024-12-03 11:52:14.390492] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:43.823 [2024-12-03 11:52:14.390497] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:43.823 [2024-12-03 11:52:14.390505] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.823 [2024-12-03 11:52:14.390513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.823 [2024-12-03 11:52:14.390529] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.823 [2024-12-03 11:52:14.390534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:43.823 [2024-12-03 11:52:14.390540] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:43.824 [2024-12-03 11:52:14.390546] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390554] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.824 [2024-12-03 11:52:14.390580] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.824 [2024-12-03 11:52:14.390585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:43.824 [2024-12-03 11:52:14.390591] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:43.824 [2024-12-03 11:52:14.390597] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390603] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390610] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:43.824 [2024-12-03 11:52:14.390620] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390629] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:43.824 [2024-12-03 11:52:14.390676] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.824 [2024-12-03 11:52:14.390682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:43.824 [2024-12-03 11:52:14.390690] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:43.824 [2024-12-03 11:52:14.390696] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:43.824 [2024-12-03 11:52:14.390701] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:43.824 [2024-12-03 11:52:14.390706] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:43.824 [2024-12-03 11:52:14.390712] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:43.824 [2024-12-03 11:52:14.390718] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390723] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390732] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390740] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.824 [2024-12-03 11:52:14.390767] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.824 [2024-12-03 11:52:14.390773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.824 [2024-12-03 11:52:14.390781] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.824 [2024-12-03 11:52:14.390795] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.824 [2024-12-03 11:52:14.390810] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.824 [2024-12-03 11:52:14.390823] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.824 [2024-12-03 11:52:14.390836] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390842] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390859] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.824 [2024-12-03 11:52:14.390882] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.824 [2024-12-03 11:52:14.390888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:24:43.824 [2024-12-03 11:52:14.390894] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:43.824 [2024-12-03 11:52:14.390900] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390905] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390921] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.390928] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.390936] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.824 [2024-12-03 11:52:14.390957] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.824 [2024-12-03 11:52:14.390963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:24:43.824 [2024-12-03 11:52:14.391011] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.391017] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.391025] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.391033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.391040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183d00 00:24:43.824 [2024-12-03 11:52:14.391063] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.824 [2024-12-03 11:52:14.391070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:43.824 [2024-12-03 11:52:14.391081] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:43.824 
[2024-12-03 11:52:14.391096] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.391102] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.391115] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.391124] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.824 [2024-12-03 11:52:14.391131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:43.824 [2024-12-03 11:52:14.391159] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.824 [2024-12-03 11:52:14.391164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:43.824 [2024-12-03 11:52:14.391177] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:43.824 [2024-12-03 11:52:14.391183] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391191] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:43.825 [2024-12-03 11:52:14.391199] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:43.825 [2024-12-03 11:52:14.391228] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391242] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:43.825 [2024-12-03 11:52:14.391247] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391254] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:43.825 [2024-12-03 11:52:14.391263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:43.825 [2024-12-03 11:52:14.391269] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:43.825 [2024-12-03 11:52:14.391275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:43.825 [2024-12-03 11:52:14.391281] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:24:43.825 [2024-12-03 11:52:14.391287] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:43.825 [2024-12-03 11:52:14.391293] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:43.825 [2024-12-03 11:52:14.391307] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.825 [2024-12-03 11:52:14.391323] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.825 [2024-12-03 11:52:14.391340] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391352] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391358] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391370] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391379] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.825 [2024-12-03 11:52:14.391403] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391414] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391423] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.825 [2024-12-03 11:52:14.391447] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391459] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391468] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 
lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.825 [2024-12-03 11:52:14.391492] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391503] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391514] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183d00 00:24:43.825 [2024-12-03 11:52:14.391530] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183d00 00:24:43.825 [2024-12-03 11:52:14.391545] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183d00 00:24:43.825 [2024-12-03 11:52:14.391562] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183d00 00:24:43.825 [2024-12-03 11:52:14.391578] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391597] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391604] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391618] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391624] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391636] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:43.825 [2024-12-03 11:52:14.391642] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.825 [2024-12-03 11:52:14.391648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:43.825 [2024-12-03 11:52:14.391658] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:43.825 ===================================================== 00:24:43.825 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.825 ===================================================== 00:24:43.825 Controller Capabilities/Features 00:24:43.825 ================================ 00:24:43.825 Vendor ID: 8086 00:24:43.825 Subsystem Vendor ID: 8086 00:24:43.825 Serial Number: SPDK00000000000001 00:24:43.825 Model Number: SPDK bdev Controller 00:24:43.825 Firmware Version: 24.01.1 00:24:43.825 Recommended Arb Burst: 6 00:24:43.825 IEEE OUI Identifier: e4 d2 5c 00:24:43.825 Multi-path I/O 00:24:43.825 May have multiple subsystem ports: Yes 00:24:43.825 May have multiple controllers: Yes 00:24:43.825 Associated with SR-IOV VF: No 00:24:43.825 Max Data Transfer Size: 131072 00:24:43.825 Max Number of Namespaces: 32 00:24:43.825 Max Number of I/O Queues: 127 00:24:43.826 NVMe Specification Version (VS): 1.3 00:24:43.826 NVMe Specification Version (Identify): 1.3 00:24:43.826 Maximum Queue Entries: 128 00:24:43.826 Contiguous Queues Required: Yes 00:24:43.826 Arbitration Mechanisms Supported 00:24:43.826 Weighted Round Robin: Not Supported 00:24:43.826 Vendor Specific: Not Supported 00:24:43.826 Reset Timeout: 15000 ms 00:24:43.826 Doorbell Stride: 4 bytes 00:24:43.826 NVM Subsystem Reset: Not Supported 00:24:43.826 Command Sets Supported 00:24:43.826 NVM Command Set: Supported 00:24:43.826 Boot Partition: Not Supported 00:24:43.826 Memory Page Size Minimum: 4096 bytes 00:24:43.826 Memory Page Size Maximum: 4096 bytes 00:24:43.826 Persistent Memory Region: Not Supported 00:24:43.826 Optional Asynchronous Events Supported 00:24:43.826 Namespace Attribute Notices: Supported 00:24:43.826 Firmware Activation Notices: Not Supported 00:24:43.826 ANA Change Notices: Not Supported 00:24:43.826 PLE Aggregate Log Change Notices: Not Supported 00:24:43.826 LBA Status Info Alert Notices: Not Supported 00:24:43.826 EGE Aggregate Log Change Notices: Not Supported 00:24:43.826 Normal NVM Subsystem Shutdown event: Not Supported 00:24:43.826 Zone Descriptor Change Notices: Not Supported 00:24:43.826 Discovery Log Change Notices: Not Supported 00:24:43.826 Controller Attributes 00:24:43.826 128-bit Host Identifier: Supported 00:24:43.826 Non-Operational Permissive Mode: Not Supported 00:24:43.826 NVM Sets: Not Supported 00:24:43.826 Read Recovery Levels: Not Supported 00:24:43.826 Endurance Groups: Not Supported 00:24:43.826 Predictable Latency Mode: Not Supported 00:24:43.826 Traffic Based Keep ALive: Not Supported 00:24:43.826 Namespace Granularity: Not Supported 00:24:43.826 SQ Associations: Not Supported 00:24:43.826 UUID List: Not Supported 00:24:43.826 Multi-Domain Subsystem: Not Supported 00:24:43.826 Fixed Capacity Management: Not Supported 00:24:43.826 Variable Capacity Management: Not Supported 00:24:43.826 Delete Endurance Group: Not Supported 00:24:43.826 Delete NVM Set: Not Supported 00:24:43.826 Extended LBA Formats Supported: Not Supported 00:24:43.826 Flexible Data Placement Supported: Not Supported 00:24:43.826 00:24:43.826 Controller Memory Buffer Support 00:24:43.826 
================================
00:24:43.826 Supported: No
00:24:43.826
00:24:43.826 Persistent Memory Region Support
00:24:43.826 ================================
00:24:43.826 Supported: No
00:24:43.826
00:24:43.826 Admin Command Set Attributes
00:24:43.826 ============================
00:24:43.826 Security Send/Receive: Not Supported
00:24:43.826 Format NVM: Not Supported
00:24:43.826 Firmware Activate/Download: Not Supported
00:24:43.826 Namespace Management: Not Supported
00:24:43.826 Device Self-Test: Not Supported
00:24:43.826 Directives: Not Supported
00:24:43.826 NVMe-MI: Not Supported
00:24:43.826 Virtualization Management: Not Supported
00:24:43.826 Doorbell Buffer Config: Not Supported
00:24:43.826 Get LBA Status Capability: Not Supported
00:24:43.826 Command & Feature Lockdown Capability: Not Supported
00:24:43.826 Abort Command Limit: 4
00:24:43.826 Async Event Request Limit: 4
00:24:43.826 Number of Firmware Slots: N/A
00:24:43.826 Firmware Slot 1 Read-Only: N/A
00:24:43.826 Firmware Activation Without Reset: N/A
00:24:43.826 Multiple Update Detection Support: N/A
00:24:43.826 Firmware Update Granularity: No Information Provided
00:24:43.826 Per-Namespace SMART Log: No
00:24:43.826 Asymmetric Namespace Access Log Page: Not Supported
00:24:43.826 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:43.826 Command Effects Log Page: Supported
00:24:43.826 Get Log Page Extended Data: Supported
00:24:43.826 Telemetry Log Pages: Not Supported
00:24:43.826 Persistent Event Log Pages: Not Supported
00:24:43.826 Supported Log Pages Log Page: May Support
00:24:43.826 Commands Supported & Effects Log Page: Not Supported
00:24:43.826 Feature Identifiers & Effects Log Page:May Support
00:24:43.826 NVMe-MI Commands & Effects Log Page: May Support
00:24:43.826 Data Area 4 for Telemetry Log: Not Supported
00:24:43.826 Error Log Page Entries Supported: 128
00:24:43.826 Keep Alive: Supported
00:24:43.826 Keep Alive Granularity: 10000 ms
00:24:43.826
00:24:43.826 NVM Command Set Attributes
00:24:43.826 ==========================
00:24:43.826 Submission Queue Entry Size
00:24:43.826 Max: 64
00:24:43.826 Min: 64
00:24:43.826 Completion Queue Entry Size
00:24:43.826 Max: 16
00:24:43.826 Min: 16
00:24:43.826 Number of Namespaces: 32
00:24:43.826 Compare Command: Supported
00:24:43.826 Write Uncorrectable Command: Not Supported
00:24:43.826 Dataset Management Command: Supported
00:24:43.826 Write Zeroes Command: Supported
00:24:43.826 Set Features Save Field: Not Supported
00:24:43.826 Reservations: Supported
00:24:43.826 Timestamp: Not Supported
00:24:43.826 Copy: Supported
00:24:43.826 Volatile Write Cache: Present
00:24:43.826 Atomic Write Unit (Normal): 1
00:24:43.826 Atomic Write Unit (PFail): 1
00:24:43.826 Atomic Compare & Write Unit: 1
00:24:43.826 Fused Compare & Write: Supported
00:24:43.826 Scatter-Gather List
00:24:43.826 SGL Command Set: Supported
00:24:43.826 SGL Keyed: Supported
00:24:43.826 SGL Bit Bucket Descriptor: Not Supported
00:24:43.826 SGL Metadata Pointer: Not Supported
00:24:43.826 Oversized SGL: Not Supported
00:24:43.826 SGL Metadata Address: Not Supported
00:24:43.826 SGL Offset: Supported
00:24:43.827 Transport SGL Data Block: Not Supported
00:24:43.827 Replay Protected Memory Block: Not Supported
00:24:43.827
00:24:43.827 Firmware Slot Information
00:24:43.827 =========================
00:24:43.827 Active slot: 1
00:24:43.827 Slot 1 Firmware Revision: 24.01.1
00:24:43.827
00:24:43.827
00:24:43.827 Commands Supported and Effects
00:24:43.827 ==============================
00:24:43.827 Admin Commands 00:24:43.827 -------------- 00:24:43.827 Get Log Page (02h): Supported 00:24:43.827 Identify (06h): Supported 00:24:43.827 Abort (08h): Supported 00:24:43.827 Set Features (09h): Supported 00:24:43.827 Get Features (0Ah): Supported 00:24:43.827 Asynchronous Event Request (0Ch): Supported 00:24:43.827 Keep Alive (18h): Supported 00:24:43.827 I/O Commands 00:24:43.827 ------------ 00:24:43.827 Flush (00h): Supported LBA-Change 00:24:43.827 Write (01h): Supported LBA-Change 00:24:43.827 Read (02h): Supported 00:24:43.827 Compare (05h): Supported 00:24:43.827 Write Zeroes (08h): Supported LBA-Change 00:24:43.827 Dataset Management (09h): Supported LBA-Change 00:24:43.827 Copy (19h): Supported LBA-Change 00:24:43.827 Unknown (79h): Supported LBA-Change 00:24:43.827 Unknown (7Ah): Supported 00:24:43.827 00:24:43.827 Error Log 00:24:43.827 ========= 00:24:43.827 00:24:43.827 Arbitration 00:24:43.827 =========== 00:24:43.827 Arbitration Burst: 1 00:24:43.827 00:24:43.827 Power Management 00:24:43.827 ================ 00:24:43.827 Number of Power States: 1 00:24:43.827 Current Power State: Power State #0 00:24:43.827 Power State #0: 00:24:43.827 Max Power: 0.00 W 00:24:43.827 Non-Operational State: Operational 00:24:43.827 Entry Latency: Not Reported 00:24:43.827 Exit Latency: Not Reported 00:24:43.827 Relative Read Throughput: 0 00:24:43.827 Relative Read Latency: 0 00:24:43.827 Relative Write Throughput: 0 00:24:43.827 Relative Write Latency: 0 00:24:43.827 Idle Power: Not Reported 00:24:43.827 Active Power: Not Reported 00:24:43.827 Non-Operational Permissive Mode: Not Supported 00:24:43.827 00:24:43.827 Health Information 00:24:43.827 ================== 00:24:43.827 Critical Warnings: 00:24:43.827 Available Spare Space: OK 00:24:43.827 Temperature: OK 00:24:43.827 Device Reliability: OK 00:24:43.827 Read Only: No 00:24:43.827 Volatile Memory Backup: OK 00:24:43.827 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:43.827 Temperature Threshol[2024-12-03 11:52:14.391738] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.391746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.827 [2024-12-03 11:52:14.391764] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.827 [2024-12-03 11:52:14.391769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.391775] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.391799] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:43.827 [2024-12-03 11:52:14.391808] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16425 doesn't match qid 00:24:43.827 [2024-12-03 11:52:14.391823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.391829] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16425 doesn't match qid 00:24:43.827 [2024-12-03 11:52:14.391838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.391844] 
nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16425 doesn't match qid 00:24:43.827 [2024-12-03 11:52:14.391852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.391858] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16425 doesn't match qid 00:24:43.827 [2024-12-03 11:52:14.391866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:5 sqhd:8e28 p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.391876] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.391884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.827 [2024-12-03 11:52:14.391903] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.827 [2024-12-03 11:52:14.391908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.391916] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.391924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.827 [2024-12-03 11:52:14.391930] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.391943] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.827 [2024-12-03 11:52:14.391948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.391954] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:43.827 [2024-12-03 11:52:14.391960] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:43.827 [2024-12-03 11:52:14.391966] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.391975] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.391984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.827 [2024-12-03 11:52:14.392000] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.827 [2024-12-03 11:52:14.392006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.392012] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.392021] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.392028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.827 [2024-12-03 11:52:14.392052] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.827 [2024-12-03 11:52:14.392059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:43.827 [2024-12-03 11:52:14.392066] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.392074] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.827 [2024-12-03 11:52:14.392082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.827 [2024-12-03 11:52:14.392098] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.827 [2024-12-03 11:52:14.392103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392116] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392125] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392150] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392162] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392171] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392201] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392214] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392223] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392248] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392260] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392270] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392298] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392311] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392320] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392347] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392359] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392368] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392389] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392401] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392410] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392431] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392445] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392454] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392477] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 
11:52:14.392489] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392498] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392523] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392535] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392543] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392571] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392582] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392591] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392616] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392628] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392636] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392666] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392678] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392686] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392717] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:43.828 [2024-12-03 11:52:14.392728] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392737] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.828 [2024-12-03 11:52:14.392745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.828 [2024-12-03 11:52:14.392768] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.828 [2024-12-03 11:52:14.392774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.392780] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392788] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.392819] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.392825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.392831] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392840] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.392865] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.392870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.392876] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392885] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.392910] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.392916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.392922] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392930] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.392954] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.392959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.392965] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392974] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.392983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.392997] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393008] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393017] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393042] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393054] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393062] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393089] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393101] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393114] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393141] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 
11:52:14.393153] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393162] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393185] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393197] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393205] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393233] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393244] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393254] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393279] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393291] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393299] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393332] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393344] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393352] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393381] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393393] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393402] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.829 [2024-12-03 11:52:14.393425] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.829 [2024-12-03 11:52:14.393431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:43.829 [2024-12-03 11:52:14.393437] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:43.829 [2024-12-03 11:52:14.393445] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393473] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393484] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393493] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393522] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393533] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393543] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393570] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393582] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393591] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393620] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393631] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393640] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393663] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393675] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393684] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393707] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393719] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393727] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393758] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393770] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393779] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393800] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 
11:52:14.393813] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393821] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393845] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393856] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393865] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393888] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393900] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393908] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393932] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393943] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393952] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.393974] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.393979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.393985] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.393994] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.394001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.394017] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.394022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.394029] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.394037] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.394045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.830 [2024-12-03 11:52:14.394068] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.830 [2024-12-03 11:52:14.394074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:43.830 [2024-12-03 11:52:14.394081] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:43.830 [2024-12-03 11:52:14.394090] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.831 [2024-12-03 11:52:14.394097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.831 [2024-12-03 11:52:14.398114] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.831 [2024-12-03 11:52:14.398122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:43.831 [2024-12-03 11:52:14.398128] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:43.831 [2024-12-03 11:52:14.398137] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:43.831 [2024-12-03 11:52:14.398145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:43.831 [2024-12-03 11:52:14.398161] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:43.831 [2024-12-03 11:52:14.398166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0 00:24:43.831 [2024-12-03 11:52:14.398173] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:43.831 [2024-12-03 11:52:14.398179] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:44.089 d: 0 Kelvin (-273 Celsius) 00:24:44.089 Available Spare: 0% 00:24:44.089 Available Spare Threshold: 0% 00:24:44.089 Life Percentage Used: 0% 00:24:44.089 Data Units Read: 0 00:24:44.089 Data Units Written: 0 00:24:44.089 Host Read Commands: 0 00:24:44.089 Host Write Commands: 0 00:24:44.089 Controller Busy Time: 0 minutes 00:24:44.089 Power Cycles: 0 00:24:44.089 Power On Hours: 0 hours 00:24:44.089 Unsafe Shutdowns: 0 00:24:44.089 Unrecoverable Media Errors: 0 00:24:44.089 Lifetime Error Log Entries: 0 00:24:44.089 Warning Temperature Time: 0 minutes 00:24:44.089 Critical Temperature Time: 0 minutes 00:24:44.089 00:24:44.089 Number of Queues 00:24:44.089 ================ 00:24:44.089 Number of I/O 
Submission Queues: 127 00:24:44.089 Number of I/O Completion Queues: 127 00:24:44.089 00:24:44.089 Active Namespaces 00:24:44.089 ================= 00:24:44.089 Namespace ID:1 00:24:44.089 Error Recovery Timeout: Unlimited 00:24:44.089 Command Set Identifier: NVM (00h) 00:24:44.089 Deallocate: Supported 00:24:44.089 Deallocated/Unwritten Error: Not Supported 00:24:44.089 Deallocated Read Value: Unknown 00:24:44.089 Deallocate in Write Zeroes: Not Supported 00:24:44.089 Deallocated Guard Field: 0xFFFF 00:24:44.089 Flush: Supported 00:24:44.089 Reservation: Supported 00:24:44.089 Namespace Sharing Capabilities: Multiple Controllers 00:24:44.089 Size (in LBAs): 131072 (0GiB) 00:24:44.089 Capacity (in LBAs): 131072 (0GiB) 00:24:44.089 Utilization (in LBAs): 131072 (0GiB) 00:24:44.089 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:44.089 EUI64: ABCDEF0123456789 00:24:44.089 UUID: fd9e9963-e67b-4549-b0a7-50c329ccfe79 00:24:44.089 Thin Provisioning: Not Supported 00:24:44.089 Per-NS Atomic Units: Yes 00:24:44.089 Atomic Boundary Size (Normal): 0 00:24:44.089 Atomic Boundary Size (PFail): 0 00:24:44.089 Atomic Boundary Offset: 0 00:24:44.089 Maximum Single Source Range Length: 65535 00:24:44.089 Maximum Copy Length: 65535 00:24:44.089 Maximum Source Range Count: 1 00:24:44.089 NGUID/EUI64 Never Reused: No 00:24:44.089 Namespace Write Protected: No 00:24:44.089 Number of LBA Formats: 1 00:24:44.089 Current LBA Format: LBA Format #00 00:24:44.089 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:44.089 00:24:44.089 11:52:14 -- host/identify.sh@51 -- # sync 00:24:44.089 11:52:14 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.089 11:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.089 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:44.089 11:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.089 11:52:14 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:44.089 11:52:14 -- host/identify.sh@56 -- # nvmftestfini 00:24:44.089 11:52:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:44.089 11:52:14 -- nvmf/common.sh@116 -- # sync 00:24:44.089 11:52:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:44.089 11:52:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:44.089 11:52:14 -- nvmf/common.sh@119 -- # set +e 00:24:44.089 11:52:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:44.089 11:52:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:44.089 rmmod nvme_rdma 00:24:44.089 rmmod nvme_fabrics 00:24:44.089 11:52:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:44.089 11:52:14 -- nvmf/common.sh@123 -- # set -e 00:24:44.089 11:52:14 -- nvmf/common.sh@124 -- # return 0 00:24:44.089 11:52:14 -- nvmf/common.sh@477 -- # '[' -n 3847358 ']' 00:24:44.089 11:52:14 -- nvmf/common.sh@478 -- # killprocess 3847358 00:24:44.089 11:52:14 -- common/autotest_common.sh@936 -- # '[' -z 3847358 ']' 00:24:44.089 11:52:14 -- common/autotest_common.sh@940 -- # kill -0 3847358 00:24:44.089 11:52:14 -- common/autotest_common.sh@941 -- # uname 00:24:44.089 11:52:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:44.089 11:52:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3847358 00:24:44.089 11:52:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:44.089 11:52:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:44.089 11:52:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3847358' 00:24:44.089 
killing process with pid 3847358 00:24:44.089 11:52:14 -- common/autotest_common.sh@955 -- # kill 3847358 00:24:44.089 [2024-12-03 11:52:14.563263] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:44.089 11:52:14 -- common/autotest_common.sh@960 -- # wait 3847358 00:24:44.347 11:52:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:44.347 11:52:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:44.347 00:24:44.347 real 0m9.052s 00:24:44.347 user 0m8.766s 00:24:44.347 sys 0m5.808s 00:24:44.347 11:52:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:44.347 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:44.347 ************************************ 00:24:44.347 END TEST nvmf_identify 00:24:44.347 ************************************ 00:24:44.347 11:52:14 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:24:44.347 11:52:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:44.347 11:52:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:44.347 11:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:44.347 ************************************ 00:24:44.347 START TEST nvmf_perf 00:24:44.347 ************************************ 00:24:44.347 11:52:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:24:44.605 * Looking for test storage... 00:24:44.605 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:44.605 11:52:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:44.605 11:52:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:44.605 11:52:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:44.605 11:52:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:44.605 11:52:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:44.605 11:52:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:44.605 11:52:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:44.605 11:52:15 -- scripts/common.sh@335 -- # IFS=.-: 00:24:44.605 11:52:15 -- scripts/common.sh@335 -- # read -ra ver1 00:24:44.605 11:52:15 -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.605 11:52:15 -- scripts/common.sh@336 -- # read -ra ver2 00:24:44.605 11:52:15 -- scripts/common.sh@337 -- # local 'op=<' 00:24:44.605 11:52:15 -- scripts/common.sh@339 -- # ver1_l=2 00:24:44.605 11:52:15 -- scripts/common.sh@340 -- # ver2_l=1 00:24:44.605 11:52:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:44.605 11:52:15 -- scripts/common.sh@343 -- # case "$op" in 00:24:44.605 11:52:15 -- scripts/common.sh@344 -- # : 1 00:24:44.605 11:52:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:44.605 11:52:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.605 11:52:15 -- scripts/common.sh@364 -- # decimal 1 00:24:44.605 11:52:15 -- scripts/common.sh@352 -- # local d=1 00:24:44.605 11:52:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.605 11:52:15 -- scripts/common.sh@354 -- # echo 1 00:24:44.605 11:52:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:44.605 11:52:15 -- scripts/common.sh@365 -- # decimal 2 00:24:44.605 11:52:15 -- scripts/common.sh@352 -- # local d=2 00:24:44.605 11:52:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.605 11:52:15 -- scripts/common.sh@354 -- # echo 2 00:24:44.605 11:52:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:44.605 11:52:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:44.605 11:52:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:44.605 11:52:15 -- scripts/common.sh@367 -- # return 0 00:24:44.605 11:52:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.605 11:52:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:44.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.605 --rc genhtml_branch_coverage=1 00:24:44.605 --rc genhtml_function_coverage=1 00:24:44.605 --rc genhtml_legend=1 00:24:44.605 --rc geninfo_all_blocks=1 00:24:44.605 --rc geninfo_unexecuted_blocks=1 00:24:44.605 00:24:44.605 ' 00:24:44.605 11:52:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:44.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.605 --rc genhtml_branch_coverage=1 00:24:44.605 --rc genhtml_function_coverage=1 00:24:44.605 --rc genhtml_legend=1 00:24:44.605 --rc geninfo_all_blocks=1 00:24:44.605 --rc geninfo_unexecuted_blocks=1 00:24:44.605 00:24:44.605 ' 00:24:44.605 11:52:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:44.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.605 --rc genhtml_branch_coverage=1 00:24:44.606 --rc genhtml_function_coverage=1 00:24:44.606 --rc genhtml_legend=1 00:24:44.606 --rc geninfo_all_blocks=1 00:24:44.606 --rc geninfo_unexecuted_blocks=1 00:24:44.606 00:24:44.606 ' 00:24:44.606 11:52:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:44.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.606 --rc genhtml_branch_coverage=1 00:24:44.606 --rc genhtml_function_coverage=1 00:24:44.606 --rc genhtml_legend=1 00:24:44.606 --rc geninfo_all_blocks=1 00:24:44.606 --rc geninfo_unexecuted_blocks=1 00:24:44.606 00:24:44.606 ' 00:24:44.606 11:52:15 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.606 11:52:15 -- nvmf/common.sh@7 -- # uname -s 00:24:44.606 11:52:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.606 11:52:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.606 11:52:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.606 11:52:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.606 11:52:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.606 11:52:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.606 11:52:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.606 11:52:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.606 11:52:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.606 11:52:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.606 11:52:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
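The variable assignments traced above come from sourcing test/nvmf/common.sh at the top of perf.sh: target ports 4420-4422, the 192.168.100.x address prefix, and a host NQN freshly generated by nvme gen-hostnqn. As a rough illustration of how those values are normally consumed by a host (not part of this run's output; the flags assume standard nvme-cli syntax, and the target address and subsystem NQN shown are the ones that appear further down in this log):
  # illustration only -- connect a host to the RDMA target these variables describe
  nvme connect -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn "$NVME_HOSTNQN"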
00:24:44.606 11:52:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:44.606 11:52:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.606 11:52:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.606 11:52:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.606 11:52:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:44.606 11:52:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.606 11:52:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.606 11:52:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.606 11:52:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.606 11:52:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.606 11:52:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.606 11:52:15 -- paths/export.sh@5 -- # export PATH 00:24:44.606 11:52:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.606 11:52:15 -- nvmf/common.sh@46 -- # : 0 00:24:44.606 11:52:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:44.606 11:52:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:44.606 11:52:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:44.606 11:52:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.606 11:52:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.606 11:52:15 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:44.606 11:52:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:44.606 11:52:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:44.606 11:52:15 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:44.606 11:52:15 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:44.606 11:52:15 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:44.606 11:52:15 -- host/perf.sh@17 -- # nvmftestinit 00:24:44.606 11:52:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:44.606 11:52:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.606 11:52:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:44.606 11:52:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:44.606 11:52:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:44.606 11:52:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.606 11:52:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.606 11:52:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.606 11:52:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:44.606 11:52:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:44.606 11:52:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:44.606 11:52:15 -- common/autotest_common.sh@10 -- # set +x 00:24:51.162 11:52:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:51.162 11:52:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:51.162 11:52:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:51.162 11:52:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:51.162 11:52:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:51.162 11:52:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:51.162 11:52:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:51.162 11:52:21 -- nvmf/common.sh@294 -- # net_devs=() 00:24:51.162 11:52:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:51.162 11:52:21 -- nvmf/common.sh@295 -- # e810=() 00:24:51.162 11:52:21 -- nvmf/common.sh@295 -- # local -ga e810 00:24:51.162 11:52:21 -- nvmf/common.sh@296 -- # x722=() 00:24:51.162 11:52:21 -- nvmf/common.sh@296 -- # local -ga x722 00:24:51.162 11:52:21 -- nvmf/common.sh@297 -- # mlx=() 00:24:51.162 11:52:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:51.162 11:52:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.162 11:52:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:51.162 11:52:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:51.162 11:52:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:51.162 11:52:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:51.162 11:52:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:51.162 11:52:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:51.162 11:52:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:51.162 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:51.162 11:52:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:51.162 11:52:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:51.162 11:52:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:51.162 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:51.162 11:52:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:51.162 11:52:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:51.162 11:52:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:51.162 11:52:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:51.162 11:52:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.162 11:52:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:51.162 11:52:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.162 11:52:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:51.162 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:51.162 11:52:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.162 11:52:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:51.162 11:52:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.162 11:52:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:51.163 11:52:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.163 11:52:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:51.163 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:51.163 11:52:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.163 11:52:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:51.163 11:52:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:51.163 11:52:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:51.163 11:52:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:51.163 11:52:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:51.163 11:52:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:51.163 11:52:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:51.163 11:52:21 -- nvmf/common.sh@57 -- # uname 00:24:51.163 11:52:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:51.163 11:52:21 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:51.163 11:52:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:51.163 11:52:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:51.163 11:52:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:51.163 11:52:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:51.163 11:52:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:51.422 11:52:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:51.422 11:52:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:51.422 11:52:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:51.422 11:52:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:51.422 11:52:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:51.422 11:52:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:51.422 11:52:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:51.422 11:52:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:51.422 11:52:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:51.422 11:52:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:51.422 11:52:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.422 11:52:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:51.422 11:52:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:51.422 11:52:21 -- nvmf/common.sh@104 -- # continue 2 00:24:51.422 11:52:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:51.422 11:52:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.422 11:52:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:51.422 11:52:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.422 11:52:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:51.422 11:52:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:51.422 11:52:21 -- nvmf/common.sh@104 -- # continue 2 00:24:51.422 11:52:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:51.422 11:52:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:51.422 11:52:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:51.422 11:52:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:51.422 11:52:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:51.422 11:52:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:51.422 11:52:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:51.422 11:52:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:51.422 11:52:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:51.422 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:51.422 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:51.422 altname enp217s0f0np0 00:24:51.422 altname ens818f0np0 00:24:51.422 inet 192.168.100.8/24 scope global mlx_0_0 00:24:51.422 valid_lft forever preferred_lft forever 00:24:51.422 11:52:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:51.422 11:52:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:51.422 11:52:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:51.422 11:52:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:51.422 11:52:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:51.422 11:52:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:51.422 11:52:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:51.422 11:52:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:51.422 11:52:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:51.422 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:51.422 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:51.422 altname enp217s0f1np1 00:24:51.422 altname ens818f1np1 00:24:51.422 inet 192.168.100.9/24 scope global mlx_0_1 00:24:51.422 valid_lft forever preferred_lft forever 00:24:51.422 11:52:21 -- nvmf/common.sh@410 -- # return 0 00:24:51.422 11:52:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:51.422 11:52:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:51.422 11:52:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:51.422 11:52:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:51.422 11:52:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:51.422 11:52:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:51.422 11:52:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:51.422 11:52:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:51.423 11:52:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:51.423 11:52:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:51.423 11:52:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:51.423 11:52:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.423 11:52:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:51.423 11:52:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:51.423 11:52:21 -- nvmf/common.sh@104 -- # continue 2 00:24:51.423 11:52:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:51.423 11:52:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.423 11:52:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:51.423 11:52:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.423 11:52:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:51.423 11:52:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:51.423 11:52:21 -- nvmf/common.sh@104 -- # continue 2 00:24:51.423 11:52:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:51.423 11:52:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:51.423 11:52:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:51.423 11:52:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:51.423 11:52:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:51.423 11:52:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:51.423 11:52:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:51.423 11:52:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:51.423 11:52:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:51.423 11:52:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:51.423 11:52:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:51.423 11:52:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:51.423 11:52:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:51.423 192.168.100.9' 00:24:51.423 11:52:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:51.423 192.168.100.9' 00:24:51.423 11:52:21 -- nvmf/common.sh@445 -- # head -n 1 00:24:51.423 11:52:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:51.423 11:52:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:51.423 192.168.100.9' 00:24:51.423 11:52:21 -- nvmf/common.sh@446 -- # head -n 1 00:24:51.423 11:52:21 -- nvmf/common.sh@446 -- # tail -n +2 00:24:51.423 11:52:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:51.423 11:52:21 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:51.423 11:52:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:51.423 11:52:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:51.423 11:52:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:51.423 11:52:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:51.423 11:52:21 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:51.423 11:52:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:51.423 11:52:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.423 11:52:21 -- common/autotest_common.sh@10 -- # set +x 00:24:51.423 11:52:21 -- nvmf/common.sh@469 -- # nvmfpid=3851090 00:24:51.423 11:52:21 -- nvmf/common.sh@470 -- # waitforlisten 3851090 00:24:51.423 11:52:21 -- common/autotest_common.sh@829 -- # '[' -z 3851090 ']' 00:24:51.423 11:52:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.423 11:52:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.423 11:52:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.423 11:52:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.423 11:52:21 -- common/autotest_common.sh@10 -- # set +x 00:24:51.423 11:52:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:51.423 [2024-12-03 11:52:22.014645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:51.423 [2024-12-03 11:52:22.014696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.681 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.681 [2024-12-03 11:52:22.083650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.681 [2024-12-03 11:52:22.153416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:51.681 [2024-12-03 11:52:22.153530] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.681 [2024-12-03 11:52:22.153539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.681 [2024-12-03 11:52:22.153548] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
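Because nvmf_tgt was started with -e 0xFFFF, the app_setup_trace notices above name two ways to inspect the tracepoints it records. A minimal sketch of both, assuming the spdk_trace tool built in this workspace (paths shortened for readability):
  # snapshot the live trace, exactly as the notice suggests (-i 0 matches the target's shm id)
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory file for offline decoding later
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0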
00:24:51.681 [2024-12-03 11:52:22.153596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.681 [2024-12-03 11:52:22.153694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.681 [2024-12-03 11:52:22.153778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.681 [2024-12-03 11:52:22.153780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.246 11:52:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.247 11:52:22 -- common/autotest_common.sh@862 -- # return 0 00:24:52.247 11:52:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:52.247 11:52:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:52.247 11:52:22 -- common/autotest_common.sh@10 -- # set +x 00:24:52.504 11:52:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.504 11:52:22 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:52.504 11:52:22 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:55.785 11:52:25 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:55.785 11:52:25 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:55.785 11:52:26 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:24:55.785 11:52:26 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:55.785 11:52:26 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:55.785 11:52:26 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:24:55.785 11:52:26 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:55.785 11:52:26 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:24:55.785 11:52:26 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:24:56.042 [2024-12-03 11:52:26.492296] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:24:56.042 [2024-12-03 11:52:26.512860] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x907540/0x914fc0) succeed. 00:24:56.043 [2024-12-03 11:52:26.522161] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x908b30/0x956660) succeed. 
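With both mlx5 IB devices created, the RDMA transport is ready and the next traced rpc.py calls assemble the subsystem that every perf run below targets. The same sequence, condensed for readability (the trace shows the full workspace path to rpc.py and the per-step timestamps):
  # commands as traced in this log, with paths shortened
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420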
00:24:56.043 11:52:26 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:56.299 11:52:26 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:56.299 11:52:26 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:56.555 11:52:27 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:56.555 11:52:27 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:56.812 11:52:27 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:56.812 [2024-12-03 11:52:27.368572] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:56.812 11:52:27 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:57.069 11:52:27 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:24:57.069 11:52:27 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:57.069 11:52:27 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:57.070 11:52:27 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:58.489 Initializing NVMe Controllers 00:24:58.489 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:24:58.489 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:24:58.489 Initialization complete. Launching workers. 00:24:58.489 ======================================================== 00:24:58.489 Latency(us) 00:24:58.489 Device Information : IOPS MiB/s Average min max 00:24:58.489 PCIE (0000:d8:00.0) NSID 1 from core 0: 103481.10 404.22 308.94 9.97 4247.02 00:24:58.489 ======================================================== 00:24:58.489 Total : 103481.10 404.22 308.94 9.97 4247.02 00:24:58.489 00:24:58.489 11:52:28 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:58.489 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.825 Initializing NVMe Controllers 00:25:01.825 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.825 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:01.825 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:01.825 Initialization complete. Launching workers. 
00:25:01.825 ======================================================== 00:25:01.825 Latency(us) 00:25:01.825 Device Information : IOPS MiB/s Average min max 00:25:01.825 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6780.22 26.49 147.29 48.12 5075.19 00:25:01.825 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5262.06 20.55 189.85 66.69 5038.12 00:25:01.825 ======================================================== 00:25:01.825 Total : 12042.28 47.04 165.89 48.12 5075.19 00:25:01.825 00:25:01.825 11:52:32 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:01.825 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.102 Initializing NVMe Controllers 00:25:05.102 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.102 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:05.102 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:05.102 Initialization complete. Launching workers. 00:25:05.102 ======================================================== 00:25:05.102 Latency(us) 00:25:05.102 Device Information : IOPS MiB/s Average min max 00:25:05.102 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19475.43 76.08 1642.66 467.33 5448.83 00:25:05.102 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4049.12 15.82 7963.47 4791.91 10136.17 00:25:05.102 ======================================================== 00:25:05.102 Total : 23524.55 91.89 2730.62 467.33 10136.17 00:25:05.102 00:25:05.102 11:52:35 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:05.102 11:52:35 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:05.102 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.365 Initializing NVMe Controllers 00:25:10.365 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.365 Controller IO queue size 128, less than required. 00:25:10.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:10.365 Controller IO queue size 128, less than required. 00:25:10.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:10.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:10.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:10.365 Initialization complete. Launching workers. 
00:25:10.365 ======================================================== 00:25:10.365 Latency(us) 00:25:10.365 Device Information : IOPS MiB/s Average min max 00:25:10.365 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4078.50 1019.62 31595.33 14031.17 68622.98 00:25:10.365 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4142.00 1035.50 30652.11 14896.32 49416.66 00:25:10.365 ======================================================== 00:25:10.365 Total : 8220.50 2055.12 31120.08 14031.17 68622.98 00:25:10.365 00:25:10.365 11:52:39 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:10.365 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.365 No valid NVMe controllers or AIO or URING devices found 00:25:10.365 Initializing NVMe Controllers 00:25:10.365 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.365 Controller IO queue size 128, less than required. 00:25:10.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:10.366 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:10.366 Controller IO queue size 128, less than required. 00:25:10.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:10.366 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:10.366 WARNING: Some requested NVMe devices were skipped 00:25:10.366 11:52:40 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:10.366 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.539 Initializing NVMe Controllers 00:25:14.539 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.539 Controller IO queue size 128, less than required. 00:25:14.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:14.539 Controller IO queue size 128, less than required. 00:25:14.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:14.539 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:14.539 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:14.539 Initialization complete. Launching workers. 
00:25:14.539 00:25:14.539 ==================== 00:25:14.539 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:14.539 RDMA transport: 00:25:14.539 dev name: mlx5_0 00:25:14.539 polls: 419141 00:25:14.539 idle_polls: 415029 00:25:14.539 completions: 46259 00:25:14.539 queued_requests: 1 00:25:14.539 total_send_wrs: 23193 00:25:14.539 send_doorbell_updates: 3907 00:25:14.539 total_recv_wrs: 23193 00:25:14.539 recv_doorbell_updates: 3907 00:25:14.539 --------------------------------- 00:25:14.539 00:25:14.539 ==================== 00:25:14.539 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:14.539 RDMA transport: 00:25:14.539 dev name: mlx5_0 00:25:14.539 polls: 417161 00:25:14.539 idle_polls: 416877 00:25:14.539 completions: 20497 00:25:14.539 queued_requests: 1 00:25:14.539 total_send_wrs: 10312 00:25:14.539 send_doorbell_updates: 250 00:25:14.539 total_recv_wrs: 10312 00:25:14.539 recv_doorbell_updates: 250 00:25:14.539 --------------------------------- 00:25:14.539 ======================================================== 00:25:14.539 Latency(us) 00:25:14.539 Device Information : IOPS MiB/s Average min max 00:25:14.539 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5820.97 1455.24 21986.10 8874.57 52517.13 00:25:14.540 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2605.46 651.36 48980.83 28098.76 73755.70 00:25:14.540 ======================================================== 00:25:14.540 Total : 8426.43 2106.61 30332.89 8874.57 73755.70 00:25:14.540 00:25:14.540 11:52:44 -- host/perf.sh@66 -- # sync 00:25:14.540 11:52:44 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:14.540 11:52:44 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:14.540 11:52:44 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:14.540 11:52:44 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:21.087 11:52:50 -- host/perf.sh@72 -- # ls_guid=fe6aef8d-b78c-46b3-9c2b-5aa726b7dca7 00:25:21.087 11:52:50 -- host/perf.sh@73 -- # get_lvs_free_mb fe6aef8d-b78c-46b3-9c2b-5aa726b7dca7 00:25:21.087 11:52:50 -- common/autotest_common.sh@1353 -- # local lvs_uuid=fe6aef8d-b78c-46b3-9c2b-5aa726b7dca7 00:25:21.087 11:52:50 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:21.087 11:52:50 -- common/autotest_common.sh@1355 -- # local fc 00:25:21.087 11:52:50 -- common/autotest_common.sh@1356 -- # local cs 00:25:21.087 11:52:50 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:21.087 11:52:51 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:21.087 { 00:25:21.087 "uuid": "fe6aef8d-b78c-46b3-9c2b-5aa726b7dca7", 00:25:21.087 "name": "lvs_0", 00:25:21.087 "base_bdev": "Nvme0n1", 00:25:21.087 "total_data_clusters": 476466, 00:25:21.087 "free_clusters": 476466, 00:25:21.087 "block_size": 512, 00:25:21.087 "cluster_size": 4194304 00:25:21.087 } 00:25:21.087 ]' 00:25:21.087 11:52:51 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="fe6aef8d-b78c-46b3-9c2b-5aa726b7dca7") .free_clusters' 00:25:21.087 11:52:51 -- common/autotest_common.sh@1358 -- # fc=476466 00:25:21.087 11:52:51 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="fe6aef8d-b78c-46b3-9c2b-5aa726b7dca7") .cluster_size' 00:25:21.087 11:52:51 
-- common/autotest_common.sh@1359 -- # cs=4194304 00:25:21.087 11:52:51 -- common/autotest_common.sh@1362 -- # free_mb=1905864 00:25:21.087 11:52:51 -- common/autotest_common.sh@1363 -- # echo 1905864 00:25:21.087 1905864 00:25:21.087 11:52:51 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:21.087 11:52:51 -- host/perf.sh@78 -- # free_mb=20480 00:25:21.087 11:52:51 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fe6aef8d-b78c-46b3-9c2b-5aa726b7dca7 lbd_0 20480 00:25:21.344 11:52:51 -- host/perf.sh@80 -- # lb_guid=e68fe4aa-e6f1-41ba-b2db-4867b11c23f2 00:25:21.344 11:52:51 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e68fe4aa-e6f1-41ba-b2db-4867b11c23f2 lvs_n_0 00:25:23.238 11:52:53 -- host/perf.sh@83 -- # ls_nested_guid=ec6b86dc-b65e-440d-8446-7d327c85f41b 00:25:23.238 11:52:53 -- host/perf.sh@84 -- # get_lvs_free_mb ec6b86dc-b65e-440d-8446-7d327c85f41b 00:25:23.238 11:52:53 -- common/autotest_common.sh@1353 -- # local lvs_uuid=ec6b86dc-b65e-440d-8446-7d327c85f41b 00:25:23.238 11:52:53 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:23.238 11:52:53 -- common/autotest_common.sh@1355 -- # local fc 00:25:23.239 11:52:53 -- common/autotest_common.sh@1356 -- # local cs 00:25:23.239 11:52:53 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:23.239 11:52:53 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:23.239 { 00:25:23.239 "uuid": "fe6aef8d-b78c-46b3-9c2b-5aa726b7dca7", 00:25:23.239 "name": "lvs_0", 00:25:23.239 "base_bdev": "Nvme0n1", 00:25:23.239 "total_data_clusters": 476466, 00:25:23.239 "free_clusters": 471346, 00:25:23.239 "block_size": 512, 00:25:23.239 "cluster_size": 4194304 00:25:23.239 }, 00:25:23.239 { 00:25:23.239 "uuid": "ec6b86dc-b65e-440d-8446-7d327c85f41b", 00:25:23.239 "name": "lvs_n_0", 00:25:23.239 "base_bdev": "e68fe4aa-e6f1-41ba-b2db-4867b11c23f2", 00:25:23.239 "total_data_clusters": 5114, 00:25:23.239 "free_clusters": 5114, 00:25:23.239 "block_size": 512, 00:25:23.239 "cluster_size": 4194304 00:25:23.239 } 00:25:23.239 ]' 00:25:23.239 11:52:53 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="ec6b86dc-b65e-440d-8446-7d327c85f41b") .free_clusters' 00:25:23.496 11:52:53 -- common/autotest_common.sh@1358 -- # fc=5114 00:25:23.496 11:52:53 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="ec6b86dc-b65e-440d-8446-7d327c85f41b") .cluster_size' 00:25:23.496 11:52:53 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:23.496 11:52:53 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:25:23.496 11:52:53 -- common/autotest_common.sh@1363 -- # echo 20456 00:25:23.496 20456 00:25:23.496 11:52:53 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:23.496 11:52:53 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec6b86dc-b65e-440d-8446-7d327c85f41b lbd_nest_0 20456 00:25:23.496 11:52:54 -- host/perf.sh@88 -- # lb_nested_guid=25eb790e-0290-443f-bcfb-3027c7d0ed43 00:25:23.496 11:52:54 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.753 11:52:54 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:23.753 11:52:54 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
25eb790e-0290-443f-bcfb-3027c7d0ed43 00:25:24.011 11:52:54 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:24.269 11:52:54 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:24.269 11:52:54 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:24.269 11:52:54 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:24.269 11:52:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:24.269 11:52:54 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:24.269 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.452 Initializing NVMe Controllers 00:25:36.452 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.452 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:36.452 Initialization complete. Launching workers. 00:25:36.452 ======================================================== 00:25:36.452 Latency(us) 00:25:36.452 Device Information : IOPS MiB/s Average min max 00:25:36.452 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5954.70 2.91 167.66 67.13 5073.21 00:25:36.452 ======================================================== 00:25:36.452 Total : 5954.70 2.91 167.66 67.13 5073.21 00:25:36.452 00:25:36.452 11:53:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:36.452 11:53:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:36.452 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.631 Initializing NVMe Controllers 00:25:48.631 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:48.631 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:48.631 Initialization complete. Launching workers. 00:25:48.631 ======================================================== 00:25:48.631 Latency(us) 00:25:48.631 Device Information : IOPS MiB/s Average min max 00:25:48.631 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2690.51 336.31 371.48 154.27 7125.68 00:25:48.631 ======================================================== 00:25:48.631 Total : 2690.51 336.31 371.48 154.27 7125.68 00:25:48.631 00:25:48.631 11:53:17 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:48.631 11:53:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:48.631 11:53:17 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:48.631 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.590 Initializing NVMe Controllers 00:25:58.590 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:58.590 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:58.590 Initialization complete. Launching workers. 
00:25:58.590 ======================================================== 00:25:58.590 Latency(us) 00:25:58.590 Device Information : IOPS MiB/s Average min max 00:25:58.590 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12360.80 6.04 2589.33 835.75 8489.88 00:25:58.590 ======================================================== 00:25:58.590 Total : 12360.80 6.04 2589.33 835.75 8489.88 00:25:58.590 00:25:58.590 11:53:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:58.590 11:53:28 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:58.590 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.777 Initializing NVMe Controllers 00:26:10.777 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:10.777 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:10.777 Initialization complete. Launching workers. 00:26:10.777 ======================================================== 00:26:10.777 Latency(us) 00:26:10.777 Device Information : IOPS MiB/s Average min max 00:26:10.777 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4011.77 501.47 7976.28 5894.25 15867.71 00:26:10.777 ======================================================== 00:26:10.777 Total : 4011.77 501.47 7976.28 5894.25 15867.71 00:26:10.777 00:26:10.777 11:53:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:10.777 11:53:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:10.777 11:53:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:10.777 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.067 Initializing NVMe Controllers 00:26:23.067 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:23.067 Controller IO queue size 128, less than required. 00:26:23.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:23.067 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:23.067 Initialization complete. Launching workers. 00:26:23.067 ======================================================== 00:26:23.067 Latency(us) 00:26:23.067 Device Information : IOPS MiB/s Average min max 00:26:23.067 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19776.30 9.66 6474.66 1627.88 15826.65 00:26:23.067 ======================================================== 00:26:23.067 Total : 19776.30 9.66 6474.66 1627.88 15826.65 00:26:23.067 00:26:23.067 11:53:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:23.067 11:53:51 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:23.067 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.034 Initializing NVMe Controllers 00:26:33.034 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:33.034 Controller IO queue size 128, less than required. 00:26:33.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:33.035 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:33.035 Initialization complete. Launching workers. 00:26:33.035 ======================================================== 00:26:33.035 Latency(us) 00:26:33.035 Device Information : IOPS MiB/s Average min max 00:26:33.035 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11419.89 1427.49 11211.70 3270.28 23173.08 00:26:33.035 ======================================================== 00:26:33.035 Total : 11419.89 1427.49 11211.70 3270.28 23173.08 00:26:33.035 00:26:33.035 11:54:02 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.035 11:54:02 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 25eb790e-0290-443f-bcfb-3027c7d0ed43 00:26:33.035 11:54:03 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:33.295 11:54:03 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e68fe4aa-e6f1-41ba-b2db-4867b11c23f2 00:26:33.553 11:54:04 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:33.812 11:54:04 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:33.812 11:54:04 -- host/perf.sh@114 -- # nvmftestfini 00:26:33.812 11:54:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:33.812 11:54:04 -- nvmf/common.sh@116 -- # sync 00:26:33.812 11:54:04 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:26:33.812 11:54:04 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:26:33.812 11:54:04 -- nvmf/common.sh@119 -- # set +e 00:26:33.812 11:54:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:33.812 11:54:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:26:33.812 rmmod nvme_rdma 00:26:33.812 rmmod nvme_fabrics 00:26:33.812 11:54:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:33.812 11:54:04 -- nvmf/common.sh@123 -- # set -e 00:26:33.812 11:54:04 -- nvmf/common.sh@124 -- # return 0 00:26:33.812 11:54:04 -- nvmf/common.sh@477 -- # '[' -n 3851090 ']' 00:26:33.812 11:54:04 -- nvmf/common.sh@478 -- # killprocess 3851090 00:26:33.812 11:54:04 -- common/autotest_common.sh@936 -- # '[' -z 3851090 ']' 00:26:33.812 11:54:04 -- common/autotest_common.sh@940 -- # kill -0 3851090 00:26:33.812 11:54:04 -- common/autotest_common.sh@941 -- # uname 00:26:33.812 11:54:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:33.812 11:54:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3851090 00:26:33.812 11:54:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:33.812 11:54:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:33.812 11:54:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3851090' 00:26:33.812 killing process with pid 3851090 00:26:33.812 11:54:04 -- common/autotest_common.sh@955 -- # kill 3851090 00:26:33.812 11:54:04 -- common/autotest_common.sh@960 -- # wait 3851090 00:26:36.340 11:54:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:36.340 11:54:06 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:26:36.340 00:26:36.340 real 1m51.882s 00:26:36.340 user 7m2.145s 00:26:36.340 sys 0m7.209s 00:26:36.340 11:54:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:36.340 11:54:06 -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.340 ************************************ 00:26:36.340 END TEST nvmf_perf 00:26:36.340 ************************************ 00:26:36.340 11:54:06 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:36.340 11:54:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:36.340 11:54:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:36.340 11:54:06 -- common/autotest_common.sh@10 -- # set +x 00:26:36.340 ************************************ 00:26:36.340 START TEST nvmf_fio_host 00:26:36.340 ************************************ 00:26:36.340 11:54:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:36.340 * Looking for test storage... 00:26:36.340 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:36.340 11:54:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:36.340 11:54:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:36.340 11:54:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:36.600 11:54:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:36.600 11:54:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:36.600 11:54:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:36.600 11:54:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:36.600 11:54:06 -- scripts/common.sh@335 -- # IFS=.-: 00:26:36.600 11:54:07 -- scripts/common.sh@335 -- # read -ra ver1 00:26:36.600 11:54:07 -- scripts/common.sh@336 -- # IFS=.-: 00:26:36.600 11:54:07 -- scripts/common.sh@336 -- # read -ra ver2 00:26:36.600 11:54:07 -- scripts/common.sh@337 -- # local 'op=<' 00:26:36.600 11:54:07 -- scripts/common.sh@339 -- # ver1_l=2 00:26:36.600 11:54:07 -- scripts/common.sh@340 -- # ver2_l=1 00:26:36.600 11:54:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:36.600 11:54:07 -- scripts/common.sh@343 -- # case "$op" in 00:26:36.600 11:54:07 -- scripts/common.sh@344 -- # : 1 00:26:36.600 11:54:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:36.600 11:54:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:36.600 11:54:07 -- scripts/common.sh@364 -- # decimal 1 00:26:36.600 11:54:07 -- scripts/common.sh@352 -- # local d=1 00:26:36.600 11:54:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:36.600 11:54:07 -- scripts/common.sh@354 -- # echo 1 00:26:36.600 11:54:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:36.600 11:54:07 -- scripts/common.sh@365 -- # decimal 2 00:26:36.600 11:54:07 -- scripts/common.sh@352 -- # local d=2 00:26:36.600 11:54:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:36.600 11:54:07 -- scripts/common.sh@354 -- # echo 2 00:26:36.600 11:54:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:36.600 11:54:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:36.600 11:54:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:36.600 11:54:07 -- scripts/common.sh@367 -- # return 0 00:26:36.600 11:54:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:36.600 11:54:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.600 --rc genhtml_branch_coverage=1 00:26:36.600 --rc genhtml_function_coverage=1 00:26:36.600 --rc genhtml_legend=1 00:26:36.600 --rc geninfo_all_blocks=1 00:26:36.600 --rc geninfo_unexecuted_blocks=1 00:26:36.600 00:26:36.600 ' 00:26:36.600 11:54:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.600 --rc genhtml_branch_coverage=1 00:26:36.600 --rc genhtml_function_coverage=1 00:26:36.600 --rc genhtml_legend=1 00:26:36.600 --rc geninfo_all_blocks=1 00:26:36.600 --rc geninfo_unexecuted_blocks=1 00:26:36.600 00:26:36.600 ' 00:26:36.600 11:54:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.600 --rc genhtml_branch_coverage=1 00:26:36.600 --rc genhtml_function_coverage=1 00:26:36.600 --rc genhtml_legend=1 00:26:36.600 --rc geninfo_all_blocks=1 00:26:36.600 --rc geninfo_unexecuted_blocks=1 00:26:36.600 00:26:36.600 ' 00:26:36.600 11:54:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.600 --rc genhtml_branch_coverage=1 00:26:36.600 --rc genhtml_function_coverage=1 00:26:36.600 --rc genhtml_legend=1 00:26:36.600 --rc geninfo_all_blocks=1 00:26:36.600 --rc geninfo_unexecuted_blocks=1 00:26:36.600 00:26:36.600 ' 00:26:36.600 11:54:07 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:36.600 11:54:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.600 11:54:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.600 11:54:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.600 11:54:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.600 11:54:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.600 11:54:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.600 11:54:07 -- paths/export.sh@5 -- # export PATH 00:26:36.600 11:54:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.600 11:54:07 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.600 11:54:07 -- nvmf/common.sh@7 -- # uname -s 00:26:36.600 11:54:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.601 11:54:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.601 11:54:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.601 11:54:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.601 11:54:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.601 11:54:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.601 11:54:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.601 11:54:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.601 11:54:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.601 11:54:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.601 11:54:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:36.601 11:54:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:36.601 11:54:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.601 11:54:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.601 11:54:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.601 11:54:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:36.601 11:54:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.601 11:54:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.601 11:54:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.601 11:54:07 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.601 11:54:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.601 11:54:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.601 11:54:07 -- paths/export.sh@5 -- # export PATH 00:26:36.601 11:54:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.601 11:54:07 -- nvmf/common.sh@46 -- # : 0 00:26:36.601 11:54:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:36.601 11:54:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:36.601 11:54:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:36.601 11:54:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.601 11:54:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.601 11:54:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:36.601 11:54:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:36.601 11:54:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:36.601 11:54:07 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:26:36.601 11:54:07 -- host/fio.sh@14 -- # nvmftestinit 00:26:36.601 11:54:07 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:26:36.601 11:54:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.601 11:54:07 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:26:36.601 11:54:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:36.601 11:54:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:36.601 11:54:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.601 11:54:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.601 11:54:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.601 11:54:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:36.601 11:54:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:36.601 11:54:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:36.601 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:26:43.165 11:54:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:43.165 11:54:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:43.165 11:54:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:43.165 11:54:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:43.165 11:54:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:43.165 11:54:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:43.165 11:54:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:43.165 11:54:13 -- nvmf/common.sh@294 -- # net_devs=() 00:26:43.165 11:54:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:43.165 11:54:13 -- nvmf/common.sh@295 -- # e810=() 00:26:43.165 11:54:13 -- nvmf/common.sh@295 -- # local -ga e810 00:26:43.165 11:54:13 -- nvmf/common.sh@296 -- # x722=() 00:26:43.165 11:54:13 -- nvmf/common.sh@296 -- # local -ga x722 00:26:43.165 11:54:13 -- nvmf/common.sh@297 -- # mlx=() 00:26:43.165 11:54:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:43.165 11:54:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.165 11:54:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:43.165 11:54:13 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:26:43.165 11:54:13 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:26:43.165 11:54:13 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:26:43.165 11:54:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:43.165 11:54:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:43.165 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:43.165 11:54:13 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@345 -- # [[ mlx5_core == 
unbound ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:26:43.165 11:54:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:43.165 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:43.165 11:54:13 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:26:43.165 11:54:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:43.165 11:54:13 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.165 11:54:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:43.165 11:54:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.165 11:54:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:43.165 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:43.165 11:54:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.165 11:54:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.165 11:54:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:43.165 11:54:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.165 11:54:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:43.165 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:43.165 11:54:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.165 11:54:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:43.165 11:54:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:43.165 11:54:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@408 -- # rdma_device_init 00:26:43.165 11:54:13 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:26:43.165 11:54:13 -- nvmf/common.sh@57 -- # uname 00:26:43.165 11:54:13 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:26:43.165 11:54:13 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:26:43.165 11:54:13 -- nvmf/common.sh@62 -- # modprobe ib_core 00:26:43.165 11:54:13 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:26:43.165 11:54:13 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:26:43.165 11:54:13 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:26:43.165 11:54:13 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:26:43.165 11:54:13 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:26:43.165 11:54:13 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:26:43.165 11:54:13 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:43.165 11:54:13 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:26:43.165 11:54:13 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:43.165 11:54:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:26:43.165 11:54:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:26:43.165 11:54:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:43.165 11:54:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:26:43.165 11:54:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:26:43.165 11:54:13 -- nvmf/common.sh@104 -- # continue 2 00:26:43.165 11:54:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:43.165 11:54:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:26:43.165 11:54:13 -- nvmf/common.sh@104 -- # continue 2 00:26:43.165 11:54:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:26:43.165 11:54:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:26:43.165 11:54:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:26:43.165 11:54:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:26:43.165 11:54:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:43.165 11:54:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:43.165 11:54:13 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:26:43.165 11:54:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:26:43.165 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:43.165 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:43.165 altname enp217s0f0np0 00:26:43.165 altname ens818f0np0 00:26:43.165 inet 192.168.100.8/24 scope global mlx_0_0 00:26:43.165 valid_lft forever preferred_lft forever 00:26:43.165 11:54:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:26:43.165 11:54:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:26:43.165 11:54:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:26:43.165 11:54:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:26:43.165 11:54:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:43.165 11:54:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:43.165 11:54:13 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:26:43.165 11:54:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:26:43.165 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:43.165 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:43.165 altname enp217s0f1np1 00:26:43.165 altname ens818f1np1 00:26:43.165 inet 192.168.100.9/24 scope global mlx_0_1 00:26:43.165 valid_lft forever preferred_lft forever 00:26:43.165 11:54:13 -- nvmf/common.sh@410 -- # return 0 00:26:43.165 11:54:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:43.165 11:54:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:43.165 11:54:13 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:26:43.165 11:54:13 -- nvmf/common.sh@444 -- # get_available_rdma_ips 
00:26:43.424 11:54:13 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:26:43.424 11:54:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:43.424 11:54:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:26:43.424 11:54:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:26:43.424 11:54:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:43.424 11:54:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:26:43.424 11:54:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:43.424 11:54:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:43.424 11:54:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:43.424 11:54:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:26:43.424 11:54:13 -- nvmf/common.sh@104 -- # continue 2 00:26:43.424 11:54:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:43.424 11:54:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:43.424 11:54:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:43.424 11:54:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:43.424 11:54:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:43.424 11:54:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:26:43.424 11:54:13 -- nvmf/common.sh@104 -- # continue 2 00:26:43.424 11:54:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:26:43.424 11:54:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:26:43.424 11:54:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:26:43.424 11:54:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:26:43.424 11:54:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:43.424 11:54:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:43.424 11:54:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:26:43.424 11:54:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:26:43.424 11:54:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:26:43.424 11:54:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:26:43.424 11:54:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:43.424 11:54:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:43.424 11:54:13 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:26:43.424 192.168.100.9' 00:26:43.424 11:54:13 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:26:43.424 192.168.100.9' 00:26:43.424 11:54:13 -- nvmf/common.sh@445 -- # head -n 1 00:26:43.424 11:54:13 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:43.424 11:54:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:26:43.424 192.168.100.9' 00:26:43.424 11:54:13 -- nvmf/common.sh@446 -- # tail -n +2 00:26:43.424 11:54:13 -- nvmf/common.sh@446 -- # head -n 1 00:26:43.424 11:54:13 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:43.425 11:54:13 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:26:43.425 11:54:13 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:43.425 11:54:13 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:26:43.425 11:54:13 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:26:43.425 11:54:13 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:26:43.425 11:54:13 -- host/fio.sh@16 -- # [[ y != y ]] 00:26:43.425 11:54:13 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:43.425 11:54:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:43.425 11:54:13 -- common/autotest_common.sh@10 -- # set +x 
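For reference, the target-side setup that the trace below walks through can be reduced to the following RPC sequence (a minimal sketch only; rpc.py stands for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path used in this run, and the flags, NQN and address are the ones reported in the log):

# start the NVMe-oF target and wait for its RPC socket (same flags as the traced run)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# create the RDMA transport with the options this test negotiates
rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# back a namespace with a malloc bdev and expose it over RDMA on 192.168.100.8:4420
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420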
00:26:43.425 11:54:13 -- host/fio.sh@24 -- # nvmfpid=3872618 00:26:43.425 11:54:13 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:43.425 11:54:13 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:43.425 11:54:13 -- host/fio.sh@28 -- # waitforlisten 3872618 00:26:43.425 11:54:13 -- common/autotest_common.sh@829 -- # '[' -z 3872618 ']' 00:26:43.425 11:54:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.425 11:54:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:43.425 11:54:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.425 11:54:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:43.425 11:54:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.425 [2024-12-03 11:54:13.932969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:43.425 [2024-12-03 11:54:13.933014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.425 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.425 [2024-12-03 11:54:14.000696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.683 [2024-12-03 11:54:14.074387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:43.683 [2024-12-03 11:54:14.074498] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.683 [2024-12-03 11:54:14.074509] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.683 [2024-12-03 11:54:14.074517] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.683 [2024-12-03 11:54:14.074557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.683 [2024-12-03 11:54:14.074650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.683 [2024-12-03 11:54:14.074739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.683 [2024-12-03 11:54:14.074741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.249 11:54:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:44.249 11:54:14 -- common/autotest_common.sh@862 -- # return 0 00:26:44.249 11:54:14 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:44.508 [2024-12-03 11:54:14.922350] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12a3090/0x12a7580) succeed. 00:26:44.508 [2024-12-03 11:54:14.931720] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12a4680/0x12e8c20) succeed. 
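Once the subsystem is exported, the fio runs in this test drive it through the SPDK external ioengine. Stripped of the sanitizer-library probing traced below, the invocation amounts to the following sketch (the workspace path is abbreviated here as $SPDK; the fio binary location, config file and transport ID string are the ones visible in the trace):

# preload the SPDK fio plugin and address the target by transport ID instead of a block device
LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
    $SPDK/app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096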
00:26:44.508 11:54:15 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:44.508 11:54:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:44.508 11:54:15 -- common/autotest_common.sh@10 -- # set +x 00:26:44.508 11:54:15 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:44.765 Malloc1 00:26:44.765 11:54:15 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.023 11:54:15 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:45.281 11:54:15 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:45.281 [2024-12-03 11:54:15.848815] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:45.281 11:54:15 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:45.540 11:54:16 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:26:45.540 11:54:16 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:45.540 11:54:16 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:45.540 11:54:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:45.540 11:54:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:45.540 11:54:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:45.540 11:54:16 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.540 11:54:16 -- common/autotest_common.sh@1330 -- # shift 00:26:45.540 11:54:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:45.540 11:54:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.540 11:54:16 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.540 11:54:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:45.540 11:54:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:45.540 11:54:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:45.540 11:54:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:45.540 11:54:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.540 11:54:16 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.540 11:54:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:45.540 11:54:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:45.540 11:54:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:45.540 11:54:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:45.540 11:54:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:45.540 11:54:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:45.798 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:45.798 fio-3.35 00:26:45.798 Starting 1 thread 00:26:46.057 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.587 00:26:48.587 test: (groupid=0, jobs=1): err= 0: pid=3873104: Tue Dec 3 11:54:18 2024 00:26:48.587 read: IOPS=18.9k, BW=74.0MiB/s (77.6MB/s)(148MiB/2004msec) 00:26:48.587 slat (nsec): min=1330, max=21671, avg=1458.77, stdev=353.45 00:26:48.587 clat (usec): min=1501, max=6347, avg=3355.80, stdev=79.09 00:26:48.587 lat (usec): min=1517, max=6348, avg=3357.26, stdev=79.02 00:26:48.587 clat percentiles (usec): 00:26:48.587 | 1.00th=[ 3326], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3359], 00:26:48.587 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:26:48.587 | 70.00th=[ 3359], 80.00th=[ 3359], 90.00th=[ 3359], 95.00th=[ 3392], 00:26:48.587 | 99.00th=[ 3392], 99.50th=[ 3392], 99.90th=[ 4686], 99.95th=[ 5538], 00:26:48.587 | 99.99th=[ 5997] 00:26:48.587 bw ( KiB/s): min=74208, max=76416, per=100.00%, avg=75758.00, stdev=1043.73, samples=4 00:26:48.587 iops : min=18552, max=19104, avg=18939.50, stdev=260.93, samples=4 00:26:48.587 write: IOPS=18.9k, BW=74.0MiB/s (77.6MB/s)(148MiB/2004msec); 0 zone resets 00:26:48.587 slat (nsec): min=1368, max=17270, avg=1560.23, stdev=373.76 00:26:48.587 clat (usec): min=2235, max=6011, avg=3353.44, stdev=68.98 00:26:48.587 lat (usec): min=2244, max=6013, avg=3355.00, stdev=68.91 00:26:48.587 clat percentiles (usec): 00:26:48.587 | 1.00th=[ 3326], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3326], 00:26:48.587 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:26:48.587 | 70.00th=[ 3359], 80.00th=[ 3359], 90.00th=[ 3359], 95.00th=[ 3392], 00:26:48.587 | 99.00th=[ 3392], 99.50th=[ 3392], 99.90th=[ 4228], 99.95th=[ 5080], 00:26:48.587 | 99.99th=[ 5932] 00:26:48.587 bw ( KiB/s): min=74312, max=76368, per=100.00%, avg=75804.00, stdev=997.71, samples=4 00:26:48.587 iops : min=18578, max=19094, avg=18951.00, stdev=249.55, samples=4 00:26:48.587 lat (msec) : 2=0.01%, 4=99.88%, 10=0.11% 00:26:48.587 cpu : usr=99.50%, sys=0.05%, ctx=13, majf=0, minf=2 00:26:48.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:48.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.587 issued rwts: total=37948,37960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.587 00:26:48.587 Run status group 0 (all jobs): 00:26:48.587 READ: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=148MiB (155MB), run=2004-2004msec 00:26:48.587 WRITE: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=148MiB (155MB), run=2004-2004msec 00:26:48.587 11:54:18 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:48.587 11:54:18 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:48.587 11:54:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:48.587 11:54:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:48.587 11:54:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:48.587 11:54:18 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:48.587 11:54:18 -- common/autotest_common.sh@1330 -- # shift 00:26:48.587 11:54:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:48.587 11:54:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.587 11:54:18 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:48.587 11:54:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:48.587 11:54:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:48.587 11:54:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:48.587 11:54:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:48.587 11:54:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.587 11:54:18 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:48.587 11:54:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:48.587 11:54:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:48.587 11:54:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:48.587 11:54:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:48.587 11:54:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:48.587 11:54:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:48.587 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:48.587 fio-3.35 00:26:48.587 Starting 1 thread 00:26:48.587 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.117 00:26:51.117 test: (groupid=0, jobs=1): err= 0: pid=3873748: Tue Dec 3 11:54:21 2024 00:26:51.117 read: IOPS=15.1k, BW=236MiB/s (247MB/s)(464MiB/1966msec) 00:26:51.117 slat (nsec): min=2222, max=33195, avg=2559.01, stdev=843.82 00:26:51.117 clat (usec): min=431, max=7611, avg=1631.60, stdev=1335.29 00:26:51.117 lat (usec): min=433, max=7626, avg=1634.15, stdev=1335.57 00:26:51.117 clat percentiles (usec): 00:26:51.117 | 1.00th=[ 652], 5.00th=[ 742], 10.00th=[ 791], 20.00th=[ 873], 00:26:51.117 | 30.00th=[ 938], 40.00th=[ 1020], 50.00th=[ 1123], 60.00th=[ 1237], 00:26:51.117 | 70.00th=[ 1369], 80.00th=[ 1582], 90.00th=[ 4621], 95.00th=[ 4686], 00:26:51.117 | 99.00th=[ 6128], 99.50th=[ 6587], 99.90th=[ 7046], 99.95th=[ 7242], 00:26:51.117 | 99.99th=[ 7570] 00:26:51.117 bw ( KiB/s): min=112544, max=120192, per=48.53%, avg=117224.00, stdev=3506.92, samples=4 00:26:51.117 iops : min= 7034, max= 7512, avg=7326.50, stdev=219.18, samples=4 00:26:51.117 write: IOPS=8714, BW=136MiB/s (143MB/s)(239MiB/1752msec); 0 zone resets 00:26:51.117 slat (nsec): min=26233, max=93630, 
avg=28742.39, stdev=4589.84 00:26:51.117 clat (usec): min=3890, max=18475, avg=11909.93, stdev=1675.08 00:26:51.117 lat (usec): min=3919, max=18503, avg=11938.68, stdev=1674.87 00:26:51.117 clat percentiles (usec): 00:26:51.117 | 1.00th=[ 7242], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10552], 00:26:51.117 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:26:51.117 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13960], 95.00th=[14615], 00:26:51.117 | 99.00th=[16188], 99.50th=[16450], 99.90th=[17433], 99.95th=[18220], 00:26:51.117 | 99.99th=[18482] 00:26:51.117 bw ( KiB/s): min=119808, max=124064, per=87.60%, avg=122144.00, stdev=1764.94, samples=4 00:26:51.117 iops : min= 7488, max= 7754, avg=7634.00, stdev=110.31, samples=4 00:26:51.117 lat (usec) : 500=0.01%, 750=3.72%, 1000=21.19% 00:26:51.117 lat (msec) : 2=30.87%, 4=2.06%, 10=11.83%, 20=30.31% 00:26:51.117 cpu : usr=96.46%, sys=1.70%, ctx=204, majf=0, minf=1 00:26:51.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:51.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:51.117 issued rwts: total=29682,15268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:51.117 00:26:51.117 Run status group 0 (all jobs): 00:26:51.117 READ: bw=236MiB/s (247MB/s), 236MiB/s-236MiB/s (247MB/s-247MB/s), io=464MiB (486MB), run=1966-1966msec 00:26:51.117 WRITE: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=239MiB (250MB), run=1752-1752msec 00:26:51.117 11:54:21 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.117 11:54:21 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:26:51.117 11:54:21 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:26:51.117 11:54:21 -- host/fio.sh@51 -- # get_nvme_bdfs 00:26:51.117 11:54:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:51.117 11:54:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:51.117 11:54:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:51.117 11:54:21 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:51.117 11:54:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:51.376 11:54:21 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:51.376 11:54:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:26:51.376 11:54:21 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:26:54.665 Nvme0n1 00:26:54.665 11:54:24 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:26:59.930 11:54:30 -- host/fio.sh@53 -- # ls_guid=22349ca3-3f51-40d2-ac1e-a0167b85f8bc 00:26:59.930 11:54:30 -- host/fio.sh@54 -- # get_lvs_free_mb 22349ca3-3f51-40d2-ac1e-a0167b85f8bc 00:26:59.930 11:54:30 -- common/autotest_common.sh@1353 -- # local lvs_uuid=22349ca3-3f51-40d2-ac1e-a0167b85f8bc 00:26:59.930 11:54:30 -- common/autotest_common.sh@1354 -- # local lvs_info 00:26:59.930 11:54:30 -- common/autotest_common.sh@1355 -- # local fc 00:26:59.930 11:54:30 -- common/autotest_common.sh@1356 -- # local cs 00:26:59.930 11:54:30 
-- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:59.930 11:54:30 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:26:59.930 { 00:26:59.930 "uuid": "22349ca3-3f51-40d2-ac1e-a0167b85f8bc", 00:26:59.930 "name": "lvs_0", 00:26:59.930 "base_bdev": "Nvme0n1", 00:26:59.930 "total_data_clusters": 1862, 00:26:59.930 "free_clusters": 1862, 00:26:59.930 "block_size": 512, 00:26:59.930 "cluster_size": 1073741824 00:26:59.930 } 00:26:59.930 ]' 00:26:59.930 11:54:30 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="22349ca3-3f51-40d2-ac1e-a0167b85f8bc") .free_clusters' 00:26:59.930 11:54:30 -- common/autotest_common.sh@1358 -- # fc=1862 00:26:59.930 11:54:30 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="22349ca3-3f51-40d2-ac1e-a0167b85f8bc") .cluster_size' 00:27:00.186 11:54:30 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:27:00.186 11:54:30 -- common/autotest_common.sh@1362 -- # free_mb=1906688 00:27:00.186 11:54:30 -- common/autotest_common.sh@1363 -- # echo 1906688 00:27:00.186 1906688 00:27:00.186 11:54:30 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:27:00.443 e1a6b7d7-55e7-4b3f-85ce-eb1750f4fac6 00:27:00.700 11:54:31 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:00.700 11:54:31 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:00.956 11:54:31 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:01.212 11:54:31 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:01.212 11:54:31 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:01.212 11:54:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:01.212 11:54:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:01.212 11:54:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:01.212 11:54:31 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:01.212 11:54:31 -- common/autotest_common.sh@1330 -- # shift 00:27:01.212 11:54:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:01.212 11:54:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:01.212 11:54:31 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:01.212 11:54:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:01.212 11:54:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:01.212 11:54:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:01.212 11:54:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:01.212 11:54:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 
00:27:01.212 11:54:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:01.213 11:54:31 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:01.213 11:54:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:01.213 11:54:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:01.213 11:54:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:01.213 11:54:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:01.213 11:54:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:01.468 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:01.469 fio-3.35 00:27:01.469 Starting 1 thread 00:27:01.469 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.998 00:27:03.998 test: (groupid=0, jobs=1): err= 0: pid=3876066: Tue Dec 3 11:54:34 2024 00:27:03.998 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(79.4MiB/2004msec) 00:27:03.998 slat (nsec): min=1335, max=17767, avg=1467.53, stdev=341.33 00:27:03.998 clat (usec): min=208, max=332924, avg=6257.13, stdev=18467.06 00:27:03.998 lat (usec): min=210, max=332926, avg=6258.60, stdev=18467.10 00:27:03.998 clat percentiles (msec): 00:27:03.998 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:03.998 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:03.998 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:03.998 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:27:03.998 | 99.99th=[ 334] 00:27:03.998 bw ( KiB/s): min=15120, max=49192, per=99.93%, avg=40526.00, stdev=16938.76, samples=4 00:27:03.998 iops : min= 3780, max=12298, avg=10131.50, stdev=4234.69, samples=4 00:27:03.998 write: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(79.5MiB/2004msec); 0 zone resets 00:27:03.998 slat (nsec): min=1370, max=17048, avg=1581.01, stdev=279.56 00:27:03.998 clat (usec): min=176, max=333246, avg=6229.50, stdev=17943.87 00:27:03.998 lat (usec): min=178, max=333248, avg=6231.08, stdev=17943.92 00:27:03.998 clat percentiles (msec): 00:27:03.998 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:03.998 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:03.998 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:03.998 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:27:03.998 | 99.99th=[ 334] 00:27:03.998 bw ( KiB/s): min=15800, max=48856, per=99.85%, avg=40566.00, stdev=16510.74, samples=4 00:27:03.998 iops : min= 3950, max=12214, avg=10141.50, stdev=4127.68, samples=4 00:27:03.998 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:27:03.998 lat (msec) : 2=0.04%, 4=0.24%, 10=99.36%, 500=0.31% 00:27:03.998 cpu : usr=99.60%, sys=0.05%, ctx=15, majf=0, minf=2 00:27:03.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:03.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:03.998 issued rwts: total=20318,20355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:03.998 00:27:03.998 Run status group 0 (all jobs): 00:27:03.998 READ: bw=39.6MiB/s (41.5MB/s), 
39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=79.4MiB (83.2MB), run=2004-2004msec 00:27:03.998 WRITE: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=79.5MiB (83.4MB), run=2004-2004msec 00:27:03.998 11:54:34 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:04.256 11:54:34 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:05.187 11:54:35 -- host/fio.sh@64 -- # ls_nested_guid=d3715a22-fb87-4656-938c-650b5bb225c8 00:27:05.187 11:54:35 -- host/fio.sh@65 -- # get_lvs_free_mb d3715a22-fb87-4656-938c-650b5bb225c8 00:27:05.187 11:54:35 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d3715a22-fb87-4656-938c-650b5bb225c8 00:27:05.187 11:54:35 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:05.187 11:54:35 -- common/autotest_common.sh@1355 -- # local fc 00:27:05.187 11:54:35 -- common/autotest_common.sh@1356 -- # local cs 00:27:05.187 11:54:35 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:05.445 11:54:35 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:05.445 { 00:27:05.445 "uuid": "22349ca3-3f51-40d2-ac1e-a0167b85f8bc", 00:27:05.445 "name": "lvs_0", 00:27:05.445 "base_bdev": "Nvme0n1", 00:27:05.445 "total_data_clusters": 1862, 00:27:05.445 "free_clusters": 0, 00:27:05.445 "block_size": 512, 00:27:05.445 "cluster_size": 1073741824 00:27:05.445 }, 00:27:05.445 { 00:27:05.445 "uuid": "d3715a22-fb87-4656-938c-650b5bb225c8", 00:27:05.445 "name": "lvs_n_0", 00:27:05.445 "base_bdev": "e1a6b7d7-55e7-4b3f-85ce-eb1750f4fac6", 00:27:05.445 "total_data_clusters": 476206, 00:27:05.445 "free_clusters": 476206, 00:27:05.445 "block_size": 512, 00:27:05.445 "cluster_size": 4194304 00:27:05.445 } 00:27:05.445 ]' 00:27:05.445 11:54:35 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d3715a22-fb87-4656-938c-650b5bb225c8") .free_clusters' 00:27:05.445 11:54:36 -- common/autotest_common.sh@1358 -- # fc=476206 00:27:05.445 11:54:36 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d3715a22-fb87-4656-938c-650b5bb225c8") .cluster_size' 00:27:05.702 11:54:36 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:05.702 11:54:36 -- common/autotest_common.sh@1362 -- # free_mb=1904824 00:27:05.702 11:54:36 -- common/autotest_common.sh@1363 -- # echo 1904824 00:27:05.702 1904824 00:27:05.702 11:54:36 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:27:06.636 520638c5-ce43-4bb0-bda9-914e0ba826cc 00:27:06.636 11:54:36 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:06.636 11:54:37 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:06.895 11:54:37 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:07.187 11:54:37 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:07.187 11:54:37 -- 
common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:07.187 11:54:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:07.187 11:54:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:07.187 11:54:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:07.187 11:54:37 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:07.187 11:54:37 -- common/autotest_common.sh@1330 -- # shift 00:27:07.187 11:54:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:07.187 11:54:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:07.187 11:54:37 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:07.187 11:54:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:07.187 11:54:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:07.187 11:54:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:07.187 11:54:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:07.187 11:54:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:07.187 11:54:37 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:07.187 11:54:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:07.187 11:54:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:07.187 11:54:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:07.187 11:54:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:07.187 11:54:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:07.187 11:54:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:07.448 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:07.448 fio-3.35 00:27:07.448 Starting 1 thread 00:27:07.448 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.090 00:27:10.090 test: (groupid=0, jobs=1): err= 0: pid=3877279: Tue Dec 3 11:54:40 2024 00:27:10.090 read: IOPS=10.7k, BW=41.8MiB/s (43.8MB/s)(83.8MiB/2006msec) 00:27:10.090 slat (nsec): min=1349, max=17188, avg=1465.21, stdev=218.58 00:27:10.090 clat (usec): min=3009, max=10395, avg=5912.05, stdev=201.55 00:27:10.090 lat (usec): min=3011, max=10396, avg=5913.52, stdev=201.53 00:27:10.090 clat percentiles (usec): 00:27:10.090 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5866], 00:27:10.090 | 30.00th=[ 5866], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 5932], 00:27:10.090 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5932], 95.00th=[ 5997], 00:27:10.090 | 99.00th=[ 6587], 99.50th=[ 6587], 99.90th=[ 8848], 99.95th=[ 9634], 00:27:10.090 | 99.99th=[10421] 00:27:10.091 bw ( KiB/s): min=41040, max=43600, per=100.00%, avg=42772.00, stdev=1176.22, samples=4 00:27:10.091 iops : min=10260, max=10900, avg=10693.00, stdev=294.05, samples=4 00:27:10.091 write: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(83.7MiB/2006msec); 0 zone 
resets 00:27:10.091 slat (nsec): min=1378, max=17361, avg=1549.09, stdev=228.75 00:27:10.091 clat (usec): min=3010, max=10389, avg=5929.98, stdev=205.23 00:27:10.091 lat (usec): min=3013, max=10390, avg=5931.53, stdev=205.21 00:27:10.091 clat percentiles (usec): 00:27:10.091 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5866], 00:27:10.091 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:27:10.091 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5997], 95.00th=[ 5997], 00:27:10.091 | 99.00th=[ 6587], 99.50th=[ 6652], 99.90th=[ 8848], 99.95th=[ 9634], 00:27:10.091 | 99.99th=[10421] 00:27:10.091 bw ( KiB/s): min=41528, max=43176, per=99.99%, avg=42704.00, stdev=787.01, samples=4 00:27:10.091 iops : min=10382, max=10794, avg=10676.00, stdev=196.75, samples=4 00:27:10.091 lat (msec) : 4=0.04%, 10=99.93%, 20=0.03% 00:27:10.091 cpu : usr=99.45%, sys=0.15%, ctx=16, majf=0, minf=2 00:27:10.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:10.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:10.091 issued rwts: total=21446,21419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:10.091 00:27:10.091 Run status group 0 (all jobs): 00:27:10.091 READ: bw=41.8MiB/s (43.8MB/s), 41.8MiB/s-41.8MiB/s (43.8MB/s-43.8MB/s), io=83.8MiB (87.8MB), run=2006-2006msec 00:27:10.091 WRITE: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=83.7MiB (87.7MB), run=2006-2006msec 00:27:10.091 11:54:40 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:10.091 11:54:40 -- host/fio.sh@74 -- # sync 00:27:10.091 11:54:40 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:18.203 11:54:47 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:18.203 11:54:47 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:23.468 11:54:53 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:23.468 11:54:53 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:26.753 11:54:56 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:26.753 11:54:56 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:26.753 11:54:56 -- host/fio.sh@86 -- # nvmftestfini 00:27:26.753 11:54:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:26.753 11:54:56 -- nvmf/common.sh@116 -- # sync 00:27:26.753 11:54:56 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:26.753 11:54:56 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:26.753 11:54:56 -- nvmf/common.sh@119 -- # set +e 00:27:26.753 11:54:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:26.753 11:54:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:26.753 rmmod nvme_rdma 00:27:26.753 rmmod nvme_fabrics 00:27:26.753 11:54:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:26.753 11:54:56 -- nvmf/common.sh@123 -- # set -e 00:27:26.753 11:54:56 -- nvmf/common.sh@124 -- # return 0 00:27:26.753 11:54:56 -- nvmf/common.sh@477 -- # '[' -n 3872618 ']' 00:27:26.753 11:54:56 -- 
nvmf/common.sh@478 -- # killprocess 3872618 00:27:26.753 11:54:56 -- common/autotest_common.sh@936 -- # '[' -z 3872618 ']' 00:27:26.753 11:54:56 -- common/autotest_common.sh@940 -- # kill -0 3872618 00:27:26.753 11:54:56 -- common/autotest_common.sh@941 -- # uname 00:27:26.753 11:54:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:26.753 11:54:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3872618 00:27:26.753 11:54:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:26.753 11:54:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:26.753 11:54:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3872618' 00:27:26.753 killing process with pid 3872618 00:27:26.753 11:54:56 -- common/autotest_common.sh@955 -- # kill 3872618 00:27:26.753 11:54:56 -- common/autotest_common.sh@960 -- # wait 3872618 00:27:26.753 11:54:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:26.753 11:54:57 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:26.753 00:27:26.753 real 0m50.218s 00:27:26.753 user 3m37.236s 00:27:26.753 sys 0m7.791s 00:27:26.753 11:54:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:26.753 11:54:57 -- common/autotest_common.sh@10 -- # set +x 00:27:26.753 ************************************ 00:27:26.753 END TEST nvmf_fio_host 00:27:26.753 ************************************ 00:27:26.753 11:54:57 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:26.753 11:54:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:26.753 11:54:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:26.753 11:54:57 -- common/autotest_common.sh@10 -- # set +x 00:27:26.754 ************************************ 00:27:26.754 START TEST nvmf_failover 00:27:26.754 ************************************ 00:27:26.754 11:54:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:26.754 * Looking for test storage... 00:27:26.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:26.754 11:54:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:26.754 11:54:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:26.754 11:54:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:26.754 11:54:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:26.754 11:54:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:26.754 11:54:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:26.754 11:54:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:26.754 11:54:57 -- scripts/common.sh@335 -- # IFS=.-: 00:27:26.754 11:54:57 -- scripts/common.sh@335 -- # read -ra ver1 00:27:26.754 11:54:57 -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.754 11:54:57 -- scripts/common.sh@336 -- # read -ra ver2 00:27:26.754 11:54:57 -- scripts/common.sh@337 -- # local 'op=<' 00:27:26.754 11:54:57 -- scripts/common.sh@339 -- # ver1_l=2 00:27:26.754 11:54:57 -- scripts/common.sh@340 -- # ver2_l=1 00:27:26.754 11:54:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:26.754 11:54:57 -- scripts/common.sh@343 -- # case "$op" in 00:27:26.754 11:54:57 -- scripts/common.sh@344 -- # : 1 00:27:26.754 11:54:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:26.754 11:54:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:26.754 11:54:57 -- scripts/common.sh@364 -- # decimal 1 00:27:26.754 11:54:57 -- scripts/common.sh@352 -- # local d=1 00:27:26.754 11:54:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.754 11:54:57 -- scripts/common.sh@354 -- # echo 1 00:27:26.754 11:54:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:26.754 11:54:57 -- scripts/common.sh@365 -- # decimal 2 00:27:26.754 11:54:57 -- scripts/common.sh@352 -- # local d=2 00:27:26.754 11:54:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.754 11:54:57 -- scripts/common.sh@354 -- # echo 2 00:27:26.754 11:54:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:26.754 11:54:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:26.754 11:54:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:26.754 11:54:57 -- scripts/common.sh@367 -- # return 0 00:27:26.754 11:54:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.754 11:54:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:26.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.754 --rc genhtml_branch_coverage=1 00:27:26.754 --rc genhtml_function_coverage=1 00:27:26.754 --rc genhtml_legend=1 00:27:26.754 --rc geninfo_all_blocks=1 00:27:26.754 --rc geninfo_unexecuted_blocks=1 00:27:26.754 00:27:26.754 ' 00:27:26.754 11:54:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:26.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.754 --rc genhtml_branch_coverage=1 00:27:26.754 --rc genhtml_function_coverage=1 00:27:26.754 --rc genhtml_legend=1 00:27:26.754 --rc geninfo_all_blocks=1 00:27:26.754 --rc geninfo_unexecuted_blocks=1 00:27:26.754 00:27:26.754 ' 00:27:26.754 11:54:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:26.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.754 --rc genhtml_branch_coverage=1 00:27:26.754 --rc genhtml_function_coverage=1 00:27:26.754 --rc genhtml_legend=1 00:27:26.754 --rc geninfo_all_blocks=1 00:27:26.754 --rc geninfo_unexecuted_blocks=1 00:27:26.754 00:27:26.754 ' 00:27:26.754 11:54:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:26.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.754 --rc genhtml_branch_coverage=1 00:27:26.754 --rc genhtml_function_coverage=1 00:27:26.754 --rc genhtml_legend=1 00:27:26.754 --rc geninfo_all_blocks=1 00:27:26.754 --rc geninfo_unexecuted_blocks=1 00:27:26.754 00:27:26.754 ' 00:27:26.754 11:54:57 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.754 11:54:57 -- nvmf/common.sh@7 -- # uname -s 00:27:26.754 11:54:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.754 11:54:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.754 11:54:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.754 11:54:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.754 11:54:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.754 11:54:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.754 11:54:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.754 11:54:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.754 11:54:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.754 11:54:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.754 11:54:57 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:26.754 11:54:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:26.754 11:54:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.754 11:54:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.754 11:54:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.754 11:54:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:26.754 11:54:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.754 11:54:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.754 11:54:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.754 11:54:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.754 11:54:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.754 11:54:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.754 11:54:57 -- paths/export.sh@5 -- # export PATH 00:27:26.754 11:54:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.754 11:54:57 -- nvmf/common.sh@46 -- # : 0 00:27:26.754 11:54:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:26.754 11:54:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:26.754 11:54:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:26.754 11:54:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.754 11:54:57 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.754 11:54:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:26.754 11:54:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:26.754 11:54:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:26.754 11:54:57 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:26.754 11:54:57 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:26.754 11:54:57 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:26.754 11:54:57 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:26.754 11:54:57 -- host/failover.sh@18 -- # nvmftestinit 00:27:26.754 11:54:57 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:26.754 11:54:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.754 11:54:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:26.754 11:54:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:26.754 11:54:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:26.755 11:54:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.755 11:54:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.755 11:54:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.755 11:54:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:26.755 11:54:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:26.755 11:54:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:26.755 11:54:57 -- common/autotest_common.sh@10 -- # set +x 00:27:34.868 11:55:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:34.868 11:55:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:34.868 11:55:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:34.868 11:55:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:34.868 11:55:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:34.868 11:55:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:34.868 11:55:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:34.868 11:55:03 -- nvmf/common.sh@294 -- # net_devs=() 00:27:34.868 11:55:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:34.868 11:55:03 -- nvmf/common.sh@295 -- # e810=() 00:27:34.868 11:55:03 -- nvmf/common.sh@295 -- # local -ga e810 00:27:34.868 11:55:03 -- nvmf/common.sh@296 -- # x722=() 00:27:34.868 11:55:03 -- nvmf/common.sh@296 -- # local -ga x722 00:27:34.868 11:55:03 -- nvmf/common.sh@297 -- # mlx=() 00:27:34.868 11:55:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:34.868 11:55:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.868 11:55:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.868 11:55:03 
-- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:34.868 11:55:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:34.868 11:55:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:34.868 11:55:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:34.868 11:55:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:34.868 11:55:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:34.868 11:55:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:34.868 11:55:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:34.868 11:55:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:34.869 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:34.869 11:55:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:34.869 11:55:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:34.869 11:55:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:34.869 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:34.869 11:55:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:34.869 11:55:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:34.869 11:55:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:34.869 11:55:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.869 11:55:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:34.869 11:55:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.869 11:55:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:34.869 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:34.869 11:55:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.869 11:55:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:34.869 11:55:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.869 11:55:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:34.869 11:55:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.869 11:55:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:34.869 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:34.869 11:55:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.869 11:55:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:34.869 11:55:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:34.869 11:55:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:34.869 11:55:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:34.869 11:55:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 
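Of the PCI IDs enumerated above, only the two Mellanox functions at 0000:d9:00.0/.1 (0x15b3:0x1015) survive the mlx5 filter, and each one is resolved to its renamed netdev through sysfs before rdma_device_init loads the IB modules. The equivalent manual lookup, with device paths and interface names taken from the log and everything else illustrative:

    # each matching PCI function exposes exactly one net device under sysfs
    ls /sys/bus/pci/devices/0000:d9:00.0/net/    # -> mlx_0_0
    ls /sys/bus/pci/devices/0000:d9:00.1/net/    # -> mlx_0_1
    # allocate_nic_ips (a few lines down) derives the test IPs from those interfaces the
    # same way the trace does: fourth column of `ip -o -4 addr show`, prefix length cut off
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8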
00:27:34.869 11:55:03 -- nvmf/common.sh@57 -- # uname 00:27:34.869 11:55:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:34.869 11:55:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:34.869 11:55:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:34.869 11:55:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:34.869 11:55:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:34.869 11:55:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:34.869 11:55:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:34.869 11:55:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:34.869 11:55:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:34.869 11:55:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:34.869 11:55:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:34.869 11:55:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:34.869 11:55:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:34.869 11:55:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:34.869 11:55:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:34.869 11:55:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:34.869 11:55:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:34.869 11:55:04 -- nvmf/common.sh@104 -- # continue 2 00:27:34.869 11:55:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:34.869 11:55:04 -- nvmf/common.sh@104 -- # continue 2 00:27:34.869 11:55:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:34.869 11:55:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:34.869 11:55:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:34.869 11:55:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:34.869 11:55:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:34.869 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:34.869 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:34.869 altname enp217s0f0np0 00:27:34.869 altname ens818f0np0 00:27:34.869 inet 192.168.100.8/24 scope global mlx_0_0 00:27:34.869 valid_lft forever preferred_lft forever 00:27:34.869 11:55:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:34.869 11:55:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:34.869 11:55:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:34.869 11:55:04 -- nvmf/common.sh@73 -- # 
ip=192.168.100.9 00:27:34.869 11:55:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:34.869 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:34.869 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:34.869 altname enp217s0f1np1 00:27:34.869 altname ens818f1np1 00:27:34.869 inet 192.168.100.9/24 scope global mlx_0_1 00:27:34.869 valid_lft forever preferred_lft forever 00:27:34.869 11:55:04 -- nvmf/common.sh@410 -- # return 0 00:27:34.869 11:55:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:34.869 11:55:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:34.869 11:55:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:34.869 11:55:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:34.869 11:55:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:34.869 11:55:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:34.869 11:55:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:34.869 11:55:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:34.869 11:55:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:34.869 11:55:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:34.869 11:55:04 -- nvmf/common.sh@104 -- # continue 2 00:27:34.869 11:55:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:34.869 11:55:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:34.869 11:55:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:34.869 11:55:04 -- nvmf/common.sh@104 -- # continue 2 00:27:34.869 11:55:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:34.869 11:55:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:34.869 11:55:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:34.869 11:55:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:34.869 11:55:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:34.869 11:55:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:34.869 11:55:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:34.869 11:55:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:34.869 192.168.100.9' 00:27:34.869 11:55:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:34.869 192.168.100.9' 00:27:34.869 11:55:04 -- nvmf/common.sh@445 -- # head -n 1 00:27:34.869 11:55:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:34.869 11:55:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:34.869 192.168.100.9' 00:27:34.869 11:55:04 -- nvmf/common.sh@446 -- 
# tail -n +2 00:27:34.869 11:55:04 -- nvmf/common.sh@446 -- # head -n 1 00:27:34.869 11:55:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:34.869 11:55:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:34.869 11:55:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:34.869 11:55:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:34.869 11:55:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:34.869 11:55:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:34.869 11:55:04 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:34.869 11:55:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:34.869 11:55:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:34.869 11:55:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.869 11:55:04 -- nvmf/common.sh@469 -- # nvmfpid=3883688 00:27:34.869 11:55:04 -- nvmf/common.sh@470 -- # waitforlisten 3883688 00:27:34.869 11:55:04 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:34.869 11:55:04 -- common/autotest_common.sh@829 -- # '[' -z 3883688 ']' 00:27:34.869 11:55:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.869 11:55:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.869 11:55:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.869 11:55:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.869 11:55:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.869 [2024-12-03 11:55:04.256324] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:34.869 [2024-12-03 11:55:04.256370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.869 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.869 [2024-12-03 11:55:04.324721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.869 [2024-12-03 11:55:04.396060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:34.869 [2024-12-03 11:55:04.396179] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.869 [2024-12-03 11:55:04.396190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.869 [2024-12-03 11:55:04.396199] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
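nvmf_tgt (pid 3883688) is coming up on cores 1-3 here (-m 0xE) with all tracepoint groups enabled (-e 0xFFFF), and the notices above spell out how its trace could be inspected while the failover test runs. A hypothetical use of that hint, quoting only the command given in the NOTICE and adding an output redirect:

    # live snapshot of the running target's tracepoints, per the NOTICE above
    spdk_trace -s nvmf -i 0 > nvmf_trace.snapshot.txt
    # or keep the shared-memory file for offline analysis, as the last NOTICE suggests
    cp /dev/shm/nvmf_trace.0 /tmp/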
00:27:34.869 [2024-12-03 11:55:04.396297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.869 [2024-12-03 11:55:04.396383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.869 [2024-12-03 11:55:04.396385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.869 11:55:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.869 11:55:05 -- common/autotest_common.sh@862 -- # return 0 00:27:34.869 11:55:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:34.869 11:55:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:34.869 11:55:05 -- common/autotest_common.sh@10 -- # set +x 00:27:34.869 11:55:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.869 11:55:05 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:34.869 [2024-12-03 11:55:05.298788] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18c8860/0x18ccd50) succeed. 00:27:34.869 [2024-12-03 11:55:05.307893] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18c9db0/0x190e3f0) succeed. 00:27:34.869 11:55:05 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:35.126 Malloc0 00:27:35.126 11:55:05 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:35.383 11:55:05 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:35.640 11:55:05 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:35.640 [2024-12-03 11:55:06.160397] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:35.640 11:55:06 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:35.896 [2024-12-03 11:55:06.356789] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:35.896 11:55:06 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:36.154 [2024-12-03 11:55:06.549517] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:36.154 11:55:06 -- host/failover.sh@31 -- # bdevperf_pid=3884201 00:27:36.154 11:55:06 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:36.154 11:55:06 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:36.154 11:55:06 -- host/failover.sh@34 -- # waitforlisten 3884201 /var/tmp/bdevperf.sock 00:27:36.154 11:55:06 -- common/autotest_common.sh@829 -- # '[' -z 3884201 ']' 00:27:36.154 11:55:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:36.154 
11:55:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.154 11:55:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:36.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:36.154 11:55:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.154 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:27:37.088 11:55:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.088 11:55:07 -- common/autotest_common.sh@862 -- # return 0 00:27:37.088 11:55:07 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:37.346 NVMe0n1 00:27:37.346 11:55:07 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:37.346 00:27:37.604 11:55:07 -- host/failover.sh@39 -- # run_test_pid=3884390 00:27:37.604 11:55:07 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:37.604 11:55:07 -- host/failover.sh@41 -- # sleep 1 00:27:38.535 11:55:08 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:38.792 11:55:09 -- host/failover.sh@45 -- # sleep 3 00:27:42.073 11:55:12 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:42.073 00:27:42.073 11:55:12 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:42.073 11:55:12 -- host/failover.sh@50 -- # sleep 3 00:27:45.354 11:55:15 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:45.354 [2024-12-03 11:55:15.771899] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:45.354 11:55:15 -- host/failover.sh@55 -- # sleep 1 00:27:46.289 11:55:16 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:46.547 11:55:16 -- host/failover.sh@59 -- # wait 3884390 00:27:53.116 0 00:27:53.116 11:55:23 -- host/failover.sh@61 -- # killprocess 3884201 00:27:53.116 11:55:23 -- common/autotest_common.sh@936 -- # '[' -z 3884201 ']' 00:27:53.116 11:55:23 -- common/autotest_common.sh@940 -- # kill -0 3884201 00:27:53.116 11:55:23 -- common/autotest_common.sh@941 -- # uname 00:27:53.116 11:55:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:53.116 11:55:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3884201 00:27:53.116 11:55:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:53.116 11:55:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:53.116 11:55:23 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 3884201' 00:27:53.116 killing process with pid 3884201 00:27:53.116 11:55:23 -- common/autotest_common.sh@955 -- # kill 3884201 00:27:53.116 11:55:23 -- common/autotest_common.sh@960 -- # wait 3884201 00:27:53.116 11:55:23 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:53.116 [2024-12-03 11:55:06.624876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:53.116 [2024-12-03 11:55:06.624934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884201 ] 00:27:53.116 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.116 [2024-12-03 11:55:06.695347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.116 [2024-12-03 11:55:06.765772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.116 Running I/O for 15 seconds... 00:27:53.116 [2024-12-03 11:55:10.142128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183000 00:27:53.116 [2024-12-03 11:55:10.142175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x182400 00:27:53.116 [2024-12-03 11:55:10.142209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.116 [2024-12-03 11:55:10.142229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.116 [2024-12-03 11:55:10.142249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182400 00:27:53.116 [2024-12-03 11:55:10.142269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.116 [2024-12-03 11:55:10.142289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183000 
00:27:53.116 [2024-12-03 11:55:10.142309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.116 [2024-12-03 11:55:10.142328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182400 00:27:53.116 [2024-12-03 11:55:10.142348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x182400 00:27:53.116 [2024-12-03 11:55:10.142368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183000 00:27:53.116 [2024-12-03 11:55:10.142395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183000 00:27:53.116 [2024-12-03 11:55:10.142416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.116 [2024-12-03 11:55:10.142436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183000 00:27:53.116 [2024-12-03 11:55:10.142455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.116 [2024-12-03 11:55:10.142475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183000 00:27:53.116 [2024-12-03 11:55:10.142497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 
sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183000 00:27:53.116 [2024-12-03 11:55:10.142517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.116 [2024-12-03 11:55:10.142540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183000 00:27:53.116 [2024-12-03 11:55:10.142561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.116 [2024-12-03 11:55:10.142571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183000 00:27:53.116 [2024-12-03 11:55:10.142580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.142601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.142622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.142643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.142662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.142682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.142702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.142722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.142740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.142760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.142780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.142799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.142818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.142836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.142857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.142878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.142897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.142916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.142935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.142956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.142975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.142985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.142994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.143013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.143033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.143052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 
11:55:10.143062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.143071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.143092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.143116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.143136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.143155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.143175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.117 [2024-12-03 11:55:10.143195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.143215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.143236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 
len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.143256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.143276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182400 00:27:53.117 [2024-12-03 11:55:10.143294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.117 [2024-12-03 11:55:10.143305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183000 00:27:53.117 [2024-12-03 11:55:10.143316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 
[2024-12-03 11:55:10.143442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90200 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000751c000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143805] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013879b80 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.143883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.143942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.143990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.118 [2024-12-03 11:55:10.143999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.144010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x182400 00:27:53.118 [2024-12-03 11:55:10.144019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.118 [2024-12-03 11:55:10.144029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183000 00:27:53.118 [2024-12-03 11:55:10.144038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183000 00:27:53.119 [2024-12-03 11:55:10.144058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183000 00:27:53.119 [2024-12-03 11:55:10.144123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183000 00:27:53.119 [2024-12-03 11:55:10.144142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 
11:55:10.144182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183000 00:27:53.119 [2024-12-03 11:55:10.144262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183000 00:27:53.119 [2024-12-03 11:55:10.144282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183000 00:27:53.119 [2024-12-03 11:55:10.144362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91112 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000138daa80 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183000 00:27:53.119 [2024-12-03 11:55:10.144603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x182400 00:27:53.119 [2024-12-03 11:55:10.144622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.119 [2024-12-03 11:55:10.144679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.144690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183000 00:27:53.119 [2024-12-03 11:55:10.144699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.146540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.119 [2024-12-03 11:55:10.146557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.119 [2024-12-03 11:55:10.146565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91160 len:8 PRP1 0x0 PRP2 0x0 00:27:53.119 [2024-12-03 11:55:10.146575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.119 [2024-12-03 11:55:10.146619] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. 
reset controller. 00:27:53.119 [2024-12-03 11:55:10.146636] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:53.119 [2024-12-03 11:55:10.146646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:53.119 [2024-12-03 11:55:10.148482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:53.119 [2024-12-03 11:55:10.162896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:53.119 [2024-12-03 11:55:10.191127] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:53.119 [2024-12-03 11:55:13.583782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183000 00:27:53.120 [2024-12-03 11:55:13.583825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.583842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.583852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.583863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.583872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.583883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.583893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.583903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.583912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.583922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183000 00:27:53.120 [2024-12-03 11:55:13.583931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.583942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.583951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.583961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53304 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007512000 len:0x1000 key:0x183000 00:27:53.120 [2024-12-03 11:55:13.583971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.583986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.583995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183000 00:27:53.120 [2024-12-03 11:55:13.584100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 
00:27:53.120 [2024-12-03 11:55:13.584173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013878b00 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 
[2024-12-03 11:55:13.584370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183000 00:27:53.120 [2024-12-03 11:55:13.584390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.120 [2024-12-03 11:55:13.584448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x182f00 00:27:53.120 [2024-12-03 11:55:13.584470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.120 [2024-12-03 11:55:13.584480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.584489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.584547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 
dnr:0 00:27:53.121 [2024-12-03 11:55:13.584557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d5800 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.584662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.584722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.584741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.584860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.584878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.584978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.584988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.584997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.585017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183000 00:27:53.121 [2024-12-03 11:55:13.585037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.585056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.585076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.585096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 
sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.585118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.585138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.585159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x182f00 00:27:53.121 [2024-12-03 11:55:13.585179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.121 [2024-12-03 11:55:13.585198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.121 [2024-12-03 11:55:13.585209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 
key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 
00:27:53.122 [2024-12-03 11:55:13.585661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f1580 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x182f00 00:27:53.122 [2024-12-03 11:55:13.585728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.122 [2024-12-03 11:55:13.585907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.122 [2024-12-03 11:55:13.585939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183000 00:27:53.122 [2024-12-03 11:55:13.585948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.585958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x182f00 00:27:53.123 [2024-12-03 11:55:13.585967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.585978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.585987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.585998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x182f00 00:27:53.123 [2024-12-03 11:55:13.586046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.123 [2024-12-03 11:55:13.586132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.123 [2024-12-03 11:55:13.586172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x182f00 00:27:53.123 [2024-12-03 11:55:13.586231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 
00:27:53.123 [2024-12-03 11:55:13.586242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x182f00 00:27:53.123 [2024-12-03 11:55:13.586251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182f00 00:27:53.123 [2024-12-03 11:55:13.586271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:13.586310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.123 [2024-12-03 11:55:13.586332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.123 [2024-12-03 11:55:13.586351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.586362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x182f00 00:27:53.123 [2024-12-03 11:55:13.586371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.588133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.123 [2024-12-03 11:55:13.588146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.123 [2024-12-03 11:55:13.588155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54576 len:8 PRP1 0x0 PRP2 0x0 00:27:53.123 [2024-12-03 11:55:13.588164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:13.588205] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
00:27:53.123 [2024-12-03 11:55:13.588216] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:27:53.123 [2024-12-03 11:55:13.588227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:53.123 [2024-12-03 11:55:13.589985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:53.123 [2024-12-03 11:55:13.604292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:53.123 [2024-12-03 11:55:13.636940] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:53.123 [2024-12-03 11:55:17.976010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x182400 00:27:53.123 [2024-12-03 11:55:17.976050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182400 00:27:53.123 [2024-12-03 11:55:17.976078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x182400 00:27:53.123 [2024-12-03 11:55:17.976099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.123 [2024-12-03 11:55:17.976122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.123 [2024-12-03 11:55:17.976142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183000 00:27:53.123 [2024-12-03 11:55:17.976167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x182400 00:27:53.123 [2024-12-03 11:55:17.976187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:53.123 [2024-12-03 11:55:17.976206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 key:0x182400 00:27:53.123 [2024-12-03 11:55:17.976226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x182400 00:27:53.123 [2024-12-03 11:55:17.976247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.123 [2024-12-03 11:55:17.976267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182400 00:27:53.123 [2024-12-03 11:55:17.976288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.123 [2024-12-03 11:55:17.976299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 
00:27:53.124 [2024-12-03 11:55:17.976402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 
len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x182400 00:27:53.124 [2024-12-03 11:55:17.976950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183000 00:27:53.124 [2024-12-03 11:55:17.976969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.976980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.976989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.977000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.124 [2024-12-03 11:55:17.977009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.124 [2024-12-03 11:55:17.977020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.125 [2024-12-03 11:55:17.977049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183000 
00:27:53.125 [2024-12-03 11:55:17.977156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.125 [2024-12-03 11:55:17.977255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.125 [2024-12-03 11:55:17.977335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.125 [2024-12-03 11:55:17.977355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.125 [2024-12-03 11:55:17.977375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.125 [2024-12-03 11:55:17.977514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:109 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.125 [2024-12-03 11:55:17.977533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.125 [2024-12-03 11:55:17.977574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x182400 00:27:53.125 [2024-12-03 11:55:17.977633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183000 00:27:53.125 [2024-12-03 11:55:17.977673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.125 [2024-12-03 11:55:17.977684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.977693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 
11:55:17.977712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.977732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.977751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.977770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.977791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.977811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.977830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.977850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.977870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.977889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 
11:55:17.977900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.977909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.977929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.977948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.977968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.977987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.977998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.978007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.978029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.978049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.978068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92760 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.978088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.978108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.978130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.978150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.978170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.978192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.978219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.978239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.978260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183000 00:27:53.126 [2024-12-03 11:55:17.978280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.978300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.978319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.978340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x182400 00:27:53.126 [2024-12-03 11:55:17.978360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.978379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.978398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.126 [2024-12-03 11:55:17.978418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.126 [2024-12-03 11:55:17.978428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x182400 00:27:53.127 [2024-12-03 11:55:17.978436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.978447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183000 00:27:53.127 [2024-12-03 11:55:17.978456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 
11:55:17.978467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x182400 00:27:53.127 [2024-12-03 11:55:17.978476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.978487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.127 [2024-12-03 11:55:17.978496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.978506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183000 00:27:53.127 [2024-12-03 11:55:17.978515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.978526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183000 00:27:53.127 [2024-12-03 11:55:17.978535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.978546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183000 00:27:53.127 [2024-12-03 11:55:17.978554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.978565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.127 [2024-12-03 11:55:17.978574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.978584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x182400 00:27:53.127 [2024-12-03 11:55:17.978593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:3e970000 sqhd:5310 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.980260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.127 [2024-12-03 11:55:17.980273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.127 [2024-12-03 11:55:17.980282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92216 len:8 PRP1 0x0 PRP2 0x0 00:27:53.127 [2024-12-03 11:55:17.980291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-03 11:55:17.980330] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
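A note on the long run of paired print_command / print_completion notices above: this is bdev_nvme flushing the queue pair of the failing path. Every queued verify I/O is completed with ABORTED - SQ DELETION (the "(00/08)" is generic status code type 00h, status code 08h) before the qpair is freed and the controller is reset, so the abort storm is expected during failover rather than a failure; the test itself only counts successful resets. A minimal sanity check in the same spirit, assuming the bdevperf output has been captured to the try.txt file that failover.sh cats and removes later in this log (the check itself is illustrative and not part of the test):
  # illustrative only: compare abort completions vs. successful resets in the captured log
  log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
  aborted=$(grep -c 'ABORTED - SQ DELETION' "$log")
  resets=$(grep -c 'Resetting controller successful' "$log")
  echo "aborted completions: ${aborted}, successful resets: ${resets}"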
00:27:53.127 [2024-12-03 11:55:17.980343] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:27:53.127 [2024-12-03 11:55:17.980354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:53.127 [2024-12-03 11:55:17.981923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:53.127 [2024-12-03 11:55:17.995877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:53.127 [2024-12-03 11:55:18.029425] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:53.127 00:27:53.127 Latency(us) 00:27:53.127 [2024-12-03T10:55:23.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.127 [2024-12-03T10:55:23.741Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:53.127 Verification LBA range: start 0x0 length 0x4000 00:27:53.127 NVMe0n1 : 15.00 20204.39 78.92 265.86 0.00 6240.08 471.86 1020054.73 00:27:53.127 [2024-12-03T10:55:23.741Z] =================================================================================================================== 00:27:53.127 [2024-12-03T10:55:23.741Z] Total : 20204.39 78.92 265.86 0.00 6240.08 471.86 1020054.73 00:27:53.127 Received shutdown signal, test time was about 15.000000 seconds 00:27:53.127 00:27:53.127 Latency(us) 00:27:53.127 [2024-12-03T10:55:23.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.127 [2024-12-03T10:55:23.741Z] =================================================================================================================== 00:27:53.127 [2024-12-03T10:55:23.741Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:53.127 11:55:23 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:53.127 11:55:23 -- host/failover.sh@65 -- # count=3 00:27:53.127 11:55:23 -- host/failover.sh@67 -- # (( count != 3 )) 00:27:53.127 11:55:23 -- host/failover.sh@73 -- # bdevperf_pid=3886965 00:27:53.127 11:55:23 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:53.127 11:55:23 -- host/failover.sh@75 -- # waitforlisten 3886965 /var/tmp/bdevperf.sock 00:27:53.127 11:55:23 -- common/autotest_common.sh@829 -- # '[' -z 3886965 ']' 00:27:53.127 11:55:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.127 11:55:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.127 11:55:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
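At this point failover.sh has confirmed three successful controller resets and launches a second bdevperf instance in RPC-server mode: -z starts it with no bdevs configured so it can be driven over the UNIX-domain RPC socket named by -r, while -q 128 -o 4096 -w verify -t 1 select queue depth, I/O size, workload and run time. A simplified stand-in for that launch-and-wait step, assuming the same workspace layout as this job; the polling loop is an approximation of waitforlisten, not its exact code:
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  # start bdevperf idle; bdevs and the test run are driven later via rpc.py / bdevperf.py
  "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # poll the RPC socket until bdevperf answers (rough equivalent of waitforlisten)
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done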
00:27:53.127 11:55:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.127 11:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.694 11:55:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.694 11:55:24 -- common/autotest_common.sh@862 -- # return 0 00:27:53.694 11:55:24 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:53.953 [2024-12-03 11:55:24.442103] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:53.953 11:55:24 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:54.211 [2024-12-03 11:55:24.630727] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:54.212 11:55:24 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.469 NVMe0n1 00:27:54.469 11:55:24 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.725 00:27:54.725 11:55:25 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.983 00:27:54.983 11:55:25 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.983 11:55:25 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:55.240 11:55:25 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.240 11:55:25 -- host/failover.sh@87 -- # sleep 3 00:27:58.520 11:55:28 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:58.520 11:55:28 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:58.520 11:55:29 -- host/failover.sh@90 -- # run_test_pid=3887969 00:27:58.520 11:55:29 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:58.520 11:55:29 -- host/failover.sh@92 -- # wait 3887969 00:27:59.894 0 00:27:59.894 11:55:30 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:59.894 [2024-12-03 11:55:23.460644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
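The trace above builds the multipath topology for the second half of the test: the target subsystem nqn.2016-06.io.spdk:cnode1 gets additional RDMA listeners on ports 4421 and 4422, the bdevperf process attaches the same subsystem over all three ports under the single controller name NVMe0 (so NVMe0n1 is backed by three paths; 4420 was already listening from earlier in the test), and the 4420 path is then detached so I/O issued by perform_tests fails over onto the remaining paths. Condensed into plain rpc.py calls using the same addresses and names shown in the trace, as a sketch of the sequence rather than a drop-in replacement for failover.sh:
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1
  ADDR=192.168.100.8
  # target side: add the two extra RDMA listeners
  for port in 4421 4422; do
      "$RPC" nvmf_subsystem_add_listener "$NQN" -t rdma -a "$ADDR" -s "$port"
  done
  # initiator (bdevperf) side: one controller name, three RDMA paths
  for port in 4420 4421 4422; do
      "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t rdma \
          -a "$ADDR" -s "$port" -f ipv4 -n "$NQN"
  done
  # drop the first path; NVMe0n1 stays up on the remaining two during the I/O run
  "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t rdma -a "$ADDR" -s 4420 -f ipv4 -n "$NQN"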
00:27:59.894 [2024-12-03 11:55:23.460700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886965 ] 00:27:59.894 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.894 [2024-12-03 11:55:23.530341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.894 [2024-12-03 11:55:23.592929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.894 [2024-12-03 11:55:25.770230] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:59.894 [2024-12-03 11:55:25.770874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.894 [2024-12-03 11:55:25.770900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.894 [2024-12-03 11:55:25.790436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:59.894 [2024-12-03 11:55:25.806749] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:59.894 Running I/O for 1 seconds... 00:27:59.894 00:27:59.894 Latency(us) 00:27:59.894 [2024-12-03T10:55:30.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.894 [2024-12-03T10:55:30.508Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:59.894 Verification LBA range: start 0x0 length 0x4000 00:27:59.894 NVMe0n1 : 1.00 25194.04 98.41 0.00 0.00 5057.35 891.29 13159.63 00:27:59.894 [2024-12-03T10:55:30.508Z] =================================================================================================================== 00:27:59.894 [2024-12-03T10:55:30.508Z] Total : 25194.04 98.41 0.00 0.00 5057.35 891.29 13159.63 00:27:59.894 11:55:30 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:59.894 11:55:30 -- host/failover.sh@95 -- # grep -q NVMe0 00:27:59.894 11:55:30 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.153 11:55:30 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:00.153 11:55:30 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:00.153 11:55:30 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.410 11:55:30 -- host/failover.sh@101 -- # sleep 3 00:28:03.799 11:55:33 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:03.799 11:55:33 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:03.799 11:55:34 -- host/failover.sh@108 -- # killprocess 3886965 00:28:03.799 11:55:34 -- common/autotest_common.sh@936 -- # '[' -z 3886965 ']' 00:28:03.799 11:55:34 -- common/autotest_common.sh@940 -- # kill -0 3886965 00:28:03.799 11:55:34 -- common/autotest_common.sh@941 -- # uname 00:28:03.799 11:55:34 -- common/autotest_common.sh@941 
-- # '[' Linux = Linux ']' 00:28:03.799 11:55:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3886965 00:28:03.799 11:55:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:03.799 11:55:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:03.799 11:55:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3886965' 00:28:03.799 killing process with pid 3886965 00:28:03.799 11:55:34 -- common/autotest_common.sh@955 -- # kill 3886965 00:28:03.799 11:55:34 -- common/autotest_common.sh@960 -- # wait 3886965 00:28:03.799 11:55:34 -- host/failover.sh@110 -- # sync 00:28:03.799 11:55:34 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.057 11:55:34 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:04.057 11:55:34 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:04.057 11:55:34 -- host/failover.sh@116 -- # nvmftestfini 00:28:04.057 11:55:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:04.057 11:55:34 -- nvmf/common.sh@116 -- # sync 00:28:04.057 11:55:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:04.058 11:55:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:04.058 11:55:34 -- nvmf/common.sh@119 -- # set +e 00:28:04.058 11:55:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:04.058 11:55:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:04.058 rmmod nvme_rdma 00:28:04.058 rmmod nvme_fabrics 00:28:04.058 11:55:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:04.058 11:55:34 -- nvmf/common.sh@123 -- # set -e 00:28:04.058 11:55:34 -- nvmf/common.sh@124 -- # return 0 00:28:04.058 11:55:34 -- nvmf/common.sh@477 -- # '[' -n 3883688 ']' 00:28:04.058 11:55:34 -- nvmf/common.sh@478 -- # killprocess 3883688 00:28:04.058 11:55:34 -- common/autotest_common.sh@936 -- # '[' -z 3883688 ']' 00:28:04.058 11:55:34 -- common/autotest_common.sh@940 -- # kill -0 3883688 00:28:04.058 11:55:34 -- common/autotest_common.sh@941 -- # uname 00:28:04.058 11:55:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:04.058 11:55:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3883688 00:28:04.316 11:55:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:04.316 11:55:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:04.316 11:55:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3883688' 00:28:04.316 killing process with pid 3883688 00:28:04.316 11:55:34 -- common/autotest_common.sh@955 -- # kill 3883688 00:28:04.316 11:55:34 -- common/autotest_common.sh@960 -- # wait 3883688 00:28:04.575 11:55:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:04.575 11:55:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:04.575 00:28:04.575 real 0m37.858s 00:28:04.575 user 2m5.199s 00:28:04.575 sys 0m7.707s 00:28:04.575 11:55:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:04.575 11:55:34 -- common/autotest_common.sh@10 -- # set +x 00:28:04.575 ************************************ 00:28:04.575 END TEST nvmf_failover 00:28:04.575 ************************************ 00:28:04.575 11:55:34 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:04.575 11:55:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:04.575 11:55:34 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:28:04.575 11:55:34 -- common/autotest_common.sh@10 -- # set +x 00:28:04.575 ************************************ 00:28:04.575 START TEST nvmf_discovery 00:28:04.575 ************************************ 00:28:04.575 11:55:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:04.575 * Looking for test storage... 00:28:04.575 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:04.575 11:55:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:04.575 11:55:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:04.575 11:55:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:04.575 11:55:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:04.575 11:55:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:04.575 11:55:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:04.575 11:55:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:04.575 11:55:35 -- scripts/common.sh@335 -- # IFS=.-: 00:28:04.575 11:55:35 -- scripts/common.sh@335 -- # read -ra ver1 00:28:04.575 11:55:35 -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.575 11:55:35 -- scripts/common.sh@336 -- # read -ra ver2 00:28:04.575 11:55:35 -- scripts/common.sh@337 -- # local 'op=<' 00:28:04.575 11:55:35 -- scripts/common.sh@339 -- # ver1_l=2 00:28:04.575 11:55:35 -- scripts/common.sh@340 -- # ver2_l=1 00:28:04.575 11:55:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:04.575 11:55:35 -- scripts/common.sh@343 -- # case "$op" in 00:28:04.575 11:55:35 -- scripts/common.sh@344 -- # : 1 00:28:04.575 11:55:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:04.575 11:55:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.575 11:55:35 -- scripts/common.sh@364 -- # decimal 1 00:28:04.575 11:55:35 -- scripts/common.sh@352 -- # local d=1 00:28:04.575 11:55:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.575 11:55:35 -- scripts/common.sh@354 -- # echo 1 00:28:04.575 11:55:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:04.575 11:55:35 -- scripts/common.sh@365 -- # decimal 2 00:28:04.575 11:55:35 -- scripts/common.sh@352 -- # local d=2 00:28:04.575 11:55:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.575 11:55:35 -- scripts/common.sh@354 -- # echo 2 00:28:04.575 11:55:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:04.575 11:55:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:04.575 11:55:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:04.575 11:55:35 -- scripts/common.sh@367 -- # return 0 00:28:04.575 11:55:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.575 11:55:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:04.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.575 --rc genhtml_branch_coverage=1 00:28:04.575 --rc genhtml_function_coverage=1 00:28:04.575 --rc genhtml_legend=1 00:28:04.575 --rc geninfo_all_blocks=1 00:28:04.575 --rc geninfo_unexecuted_blocks=1 00:28:04.575 00:28:04.575 ' 00:28:04.575 11:55:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:04.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.575 --rc genhtml_branch_coverage=1 00:28:04.575 --rc genhtml_function_coverage=1 00:28:04.575 --rc genhtml_legend=1 00:28:04.575 --rc geninfo_all_blocks=1 00:28:04.575 --rc geninfo_unexecuted_blocks=1 00:28:04.575 00:28:04.575 ' 00:28:04.575 11:55:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:04.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.575 --rc genhtml_branch_coverage=1 00:28:04.575 --rc genhtml_function_coverage=1 00:28:04.575 --rc genhtml_legend=1 00:28:04.575 --rc geninfo_all_blocks=1 00:28:04.575 --rc geninfo_unexecuted_blocks=1 00:28:04.575 00:28:04.575 ' 00:28:04.575 11:55:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:04.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.575 --rc genhtml_branch_coverage=1 00:28:04.575 --rc genhtml_function_coverage=1 00:28:04.575 --rc genhtml_legend=1 00:28:04.575 --rc geninfo_all_blocks=1 00:28:04.575 --rc geninfo_unexecuted_blocks=1 00:28:04.575 00:28:04.575 ' 00:28:04.575 11:55:35 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.833 11:55:35 -- nvmf/common.sh@7 -- # uname -s 00:28:04.833 11:55:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.833 11:55:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.833 11:55:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.833 11:55:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.833 11:55:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.833 11:55:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.833 11:55:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.833 11:55:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.833 11:55:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.833 11:55:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.833 11:55:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:04.833 11:55:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:04.833 11:55:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.833 11:55:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.833 11:55:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.833 11:55:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:04.833 11:55:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.833 11:55:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.833 11:55:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.833 11:55:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.833 11:55:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.833 11:55:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.833 11:55:35 -- paths/export.sh@5 -- # export PATH 00:28:04.833 11:55:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.833 11:55:35 -- nvmf/common.sh@46 -- # : 0 00:28:04.833 11:55:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:04.833 11:55:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:04.833 11:55:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:04.833 11:55:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.833 11:55:35 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.833 11:55:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:04.833 11:55:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:04.833 11:55:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:04.833 11:55:35 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:28:04.833 11:55:35 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:04.833 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:04.833 11:55:35 -- host/discovery.sh@13 -- # exit 0 00:28:04.833 00:28:04.833 real 0m0.209s 00:28:04.833 user 0m0.132s 00:28:04.833 sys 0m0.092s 00:28:04.833 11:55:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:04.833 11:55:35 -- common/autotest_common.sh@10 -- # set +x 00:28:04.833 ************************************ 00:28:04.833 END TEST nvmf_discovery 00:28:04.833 ************************************ 00:28:04.833 11:55:35 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:04.833 11:55:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:04.833 11:55:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:04.833 11:55:35 -- common/autotest_common.sh@10 -- # set +x 00:28:04.834 ************************************ 00:28:04.834 START TEST nvmf_discovery_remove_ifc 00:28:04.834 ************************************ 00:28:04.834 11:55:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:04.834 * Looking for test storage... 00:28:04.834 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:04.834 11:55:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:04.834 11:55:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:04.834 11:55:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:04.834 11:55:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:04.834 11:55:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:04.834 11:55:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:04.834 11:55:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:04.834 11:55:35 -- scripts/common.sh@335 -- # IFS=.-: 00:28:04.834 11:55:35 -- scripts/common.sh@335 -- # read -ra ver1 00:28:04.834 11:55:35 -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.834 11:55:35 -- scripts/common.sh@336 -- # read -ra ver2 00:28:04.834 11:55:35 -- scripts/common.sh@337 -- # local 'op=<' 00:28:04.834 11:55:35 -- scripts/common.sh@339 -- # ver1_l=2 00:28:04.834 11:55:35 -- scripts/common.sh@340 -- # ver2_l=1 00:28:04.834 11:55:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:04.834 11:55:35 -- scripts/common.sh@343 -- # case "$op" in 00:28:04.834 11:55:35 -- scripts/common.sh@344 -- # : 1 00:28:04.834 11:55:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:04.834 11:55:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.834 11:55:35 -- scripts/common.sh@364 -- # decimal 1 00:28:04.834 11:55:35 -- scripts/common.sh@352 -- # local d=1 00:28:04.834 11:55:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.834 11:55:35 -- scripts/common.sh@354 -- # echo 1 00:28:04.834 11:55:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:05.092 11:55:35 -- scripts/common.sh@365 -- # decimal 2 00:28:05.092 11:55:35 -- scripts/common.sh@352 -- # local d=2 00:28:05.092 11:55:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.092 11:55:35 -- scripts/common.sh@354 -- # echo 2 00:28:05.092 11:55:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:05.092 11:55:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:05.092 11:55:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:05.092 11:55:35 -- scripts/common.sh@367 -- # return 0 00:28:05.092 11:55:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.092 11:55:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:05.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.092 --rc genhtml_branch_coverage=1 00:28:05.092 --rc genhtml_function_coverage=1 00:28:05.092 --rc genhtml_legend=1 00:28:05.092 --rc geninfo_all_blocks=1 00:28:05.092 --rc geninfo_unexecuted_blocks=1 00:28:05.092 00:28:05.092 ' 00:28:05.092 11:55:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:05.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.092 --rc genhtml_branch_coverage=1 00:28:05.092 --rc genhtml_function_coverage=1 00:28:05.092 --rc genhtml_legend=1 00:28:05.092 --rc geninfo_all_blocks=1 00:28:05.092 --rc geninfo_unexecuted_blocks=1 00:28:05.092 00:28:05.092 ' 00:28:05.092 11:55:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:05.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.092 --rc genhtml_branch_coverage=1 00:28:05.092 --rc genhtml_function_coverage=1 00:28:05.092 --rc genhtml_legend=1 00:28:05.092 --rc geninfo_all_blocks=1 00:28:05.092 --rc geninfo_unexecuted_blocks=1 00:28:05.092 00:28:05.092 ' 00:28:05.092 11:55:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:05.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.092 --rc genhtml_branch_coverage=1 00:28:05.092 --rc genhtml_function_coverage=1 00:28:05.092 --rc genhtml_legend=1 00:28:05.092 --rc geninfo_all_blocks=1 00:28:05.092 --rc geninfo_unexecuted_blocks=1 00:28:05.092 00:28:05.092 ' 00:28:05.092 11:55:35 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.092 11:55:35 -- nvmf/common.sh@7 -- # uname -s 00:28:05.092 11:55:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.092 11:55:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.092 11:55:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.092 11:55:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.092 11:55:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.092 11:55:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.092 11:55:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.092 11:55:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.092 11:55:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.092 11:55:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.092 11:55:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:05.092 11:55:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:05.092 11:55:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.092 11:55:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.092 11:55:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.092 11:55:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:05.092 11:55:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.092 11:55:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.092 11:55:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.093 11:55:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.093 11:55:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.093 11:55:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.093 11:55:35 -- paths/export.sh@5 -- # export PATH 00:28:05.093 11:55:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.093 11:55:35 -- nvmf/common.sh@46 -- # : 0 00:28:05.093 11:55:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:05.093 11:55:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:05.093 11:55:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:05.093 11:55:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.093 11:55:35 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.093 11:55:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:05.093 11:55:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:05.093 11:55:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:05.093 11:55:35 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:05.093 11:55:35 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:05.093 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:05.093 11:55:35 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:05.093 00:28:05.093 real 0m0.220s 00:28:05.093 user 0m0.122s 00:28:05.093 sys 0m0.116s 00:28:05.093 11:55:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:05.093 11:55:35 -- common/autotest_common.sh@10 -- # set +x 00:28:05.093 ************************************ 00:28:05.093 END TEST nvmf_discovery_remove_ifc 00:28:05.093 ************************************ 00:28:05.093 11:55:35 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:28:05.093 11:55:35 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:05.093 11:55:35 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:05.093 11:55:35 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:05.093 11:55:35 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:05.093 11:55:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:05.093 11:55:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:05.093 11:55:35 -- common/autotest_common.sh@10 -- # set +x 00:28:05.093 ************************************ 00:28:05.093 START TEST nvmf_bdevperf 00:28:05.093 ************************************ 00:28:05.093 11:55:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:05.093 * Looking for test storage... 00:28:05.093 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:05.093 11:55:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:05.093 11:55:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:05.093 11:55:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:05.093 11:55:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:05.093 11:55:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:05.093 11:55:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:05.093 11:55:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:05.093 11:55:35 -- scripts/common.sh@335 -- # IFS=.-: 00:28:05.093 11:55:35 -- scripts/common.sh@335 -- # read -ra ver1 00:28:05.093 11:55:35 -- scripts/common.sh@336 -- # IFS=.-: 00:28:05.093 11:55:35 -- scripts/common.sh@336 -- # read -ra ver2 00:28:05.093 11:55:35 -- scripts/common.sh@337 -- # local 'op=<' 00:28:05.093 11:55:35 -- scripts/common.sh@339 -- # ver1_l=2 00:28:05.093 11:55:35 -- scripts/common.sh@340 -- # ver2_l=1 00:28:05.093 11:55:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:05.093 11:55:35 -- scripts/common.sh@343 -- # case "$op" in 00:28:05.093 11:55:35 -- scripts/common.sh@344 -- # : 1 00:28:05.093 11:55:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:05.093 11:55:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:05.353 11:55:35 -- scripts/common.sh@364 -- # decimal 1 00:28:05.353 11:55:35 -- scripts/common.sh@352 -- # local d=1 00:28:05.353 11:55:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:05.353 11:55:35 -- scripts/common.sh@354 -- # echo 1 00:28:05.353 11:55:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:05.353 11:55:35 -- scripts/common.sh@365 -- # decimal 2 00:28:05.353 11:55:35 -- scripts/common.sh@352 -- # local d=2 00:28:05.353 11:55:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.353 11:55:35 -- scripts/common.sh@354 -- # echo 2 00:28:05.353 11:55:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:05.353 11:55:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:05.353 11:55:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:05.353 11:55:35 -- scripts/common.sh@367 -- # return 0 00:28:05.353 11:55:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.353 11:55:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:05.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.353 --rc genhtml_branch_coverage=1 00:28:05.353 --rc genhtml_function_coverage=1 00:28:05.353 --rc genhtml_legend=1 00:28:05.353 --rc geninfo_all_blocks=1 00:28:05.353 --rc geninfo_unexecuted_blocks=1 00:28:05.353 00:28:05.353 ' 00:28:05.353 11:55:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:05.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.353 --rc genhtml_branch_coverage=1 00:28:05.353 --rc genhtml_function_coverage=1 00:28:05.353 --rc genhtml_legend=1 00:28:05.353 --rc geninfo_all_blocks=1 00:28:05.353 --rc geninfo_unexecuted_blocks=1 00:28:05.353 00:28:05.353 ' 00:28:05.353 11:55:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:05.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.353 --rc genhtml_branch_coverage=1 00:28:05.353 --rc genhtml_function_coverage=1 00:28:05.353 --rc genhtml_legend=1 00:28:05.353 --rc geninfo_all_blocks=1 00:28:05.353 --rc geninfo_unexecuted_blocks=1 00:28:05.353 00:28:05.353 ' 00:28:05.353 11:55:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:05.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.353 --rc genhtml_branch_coverage=1 00:28:05.353 --rc genhtml_function_coverage=1 00:28:05.353 --rc genhtml_legend=1 00:28:05.353 --rc geninfo_all_blocks=1 00:28:05.353 --rc geninfo_unexecuted_blocks=1 00:28:05.353 00:28:05.353 ' 00:28:05.353 11:55:35 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.353 11:55:35 -- nvmf/common.sh@7 -- # uname -s 00:28:05.353 11:55:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.353 11:55:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.353 11:55:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.353 11:55:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.353 11:55:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.353 11:55:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.353 11:55:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.353 11:55:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.353 11:55:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.353 11:55:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.353 11:55:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:05.353 11:55:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:05.353 11:55:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.353 11:55:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.353 11:55:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.353 11:55:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:05.353 11:55:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.353 11:55:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.353 11:55:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.353 11:55:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.353 11:55:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.353 11:55:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.353 11:55:35 -- paths/export.sh@5 -- # export PATH 00:28:05.353 11:55:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.353 11:55:35 -- nvmf/common.sh@46 -- # : 0 00:28:05.353 11:55:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:05.353 11:55:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:05.353 11:55:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:05.353 11:55:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.353 11:55:35 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.353 11:55:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:05.353 11:55:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:05.353 11:55:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:05.353 11:55:35 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:05.353 11:55:35 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:05.353 11:55:35 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:05.353 11:55:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:05.353 11:55:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.353 11:55:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:05.353 11:55:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:05.353 11:55:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:05.353 11:55:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.353 11:55:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.353 11:55:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.353 11:55:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:05.353 11:55:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:05.353 11:55:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:05.353 11:55:35 -- common/autotest_common.sh@10 -- # set +x 00:28:11.923 11:55:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:11.923 11:55:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:11.923 11:55:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:11.923 11:55:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:11.923 11:55:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:11.923 11:55:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:11.923 11:55:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:11.923 11:55:42 -- nvmf/common.sh@294 -- # net_devs=() 00:28:11.923 11:55:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:11.923 11:55:42 -- nvmf/common.sh@295 -- # e810=() 00:28:11.923 11:55:42 -- nvmf/common.sh@295 -- # local -ga e810 00:28:11.923 11:55:42 -- nvmf/common.sh@296 -- # x722=() 00:28:11.923 11:55:42 -- nvmf/common.sh@296 -- # local -ga x722 00:28:11.923 11:55:42 -- nvmf/common.sh@297 -- # mlx=() 00:28:11.923 11:55:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:11.923 11:55:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.923 11:55:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:11.923 11:55:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:11.923 
11:55:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:11.923 11:55:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:11.923 11:55:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:11.923 11:55:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:11.923 11:55:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:11.923 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:11.923 11:55:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:11.923 11:55:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:11.923 11:55:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:11.923 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:11.923 11:55:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:11.923 11:55:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:11.923 11:55:42 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:11.923 11:55:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:11.923 11:55:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.923 11:55:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:11.923 11:55:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.923 11:55:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:11.923 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:11.924 11:55:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.924 11:55:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:11.924 11:55:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.924 11:55:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:11.924 11:55:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.924 11:55:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:11.924 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:11.924 11:55:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.924 11:55:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:11.924 11:55:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:11.924 11:55:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:11.924 11:55:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:11.924 11:55:42 -- nvmf/common.sh@57 -- # uname 00:28:11.924 11:55:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:11.924 11:55:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:11.924 
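The two "Found ..." pairs above come from gather_supported_nvmf_pci_devs walking the mlx device-ID table and resolving each ConnectX function (0x15b3:0x1015 at 0000:d9:00.0 and 0000:d9:00.1) to its net device through sysfs; the modprobe run that continues below then loads the IB/RDMA modules those interfaces need. A minimal stand-alone sketch of that lookup, assuming only the sysfs layout and reusing the PCI address and ip/awk/cut pipeline already shown in this log (the helper itself is not part of the test scripts):

    #!/usr/bin/env bash
    # sketch: resolve a PCI function to its net device and IPv4 address,
    # mirroring what nvmf/common.sh does above with pci_net_devs and get_ip_address
    pci=0000:d9:00.0                                   # address taken from this log
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        dev=${netdir##*/}                              # e.g. mlx_0_0
        ip4=$(ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1)
        echo "Found net device under $pci: $dev (${ip4:-no IPv4 assigned})"
    done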
11:55:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:11.924 11:55:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:11.924 11:55:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:11.924 11:55:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:11.924 11:55:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:11.924 11:55:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:11.924 11:55:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:11.924 11:55:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:11.924 11:55:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:11.924 11:55:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:11.924 11:55:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:11.924 11:55:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:11.924 11:55:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:11.924 11:55:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:11.924 11:55:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:11.924 11:55:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:11.924 11:55:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:11.924 11:55:42 -- nvmf/common.sh@104 -- # continue 2 00:28:11.924 11:55:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:11.924 11:55:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:11.924 11:55:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:11.924 11:55:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:11.924 11:55:42 -- nvmf/common.sh@104 -- # continue 2 00:28:11.924 11:55:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:11.924 11:55:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:11.924 11:55:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:11.924 11:55:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:11.924 11:55:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:11.924 11:55:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:11.924 11:55:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:11.924 11:55:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:11.924 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:11.924 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:11.924 altname enp217s0f0np0 00:28:11.924 altname ens818f0np0 00:28:11.924 inet 192.168.100.8/24 scope global mlx_0_0 00:28:11.924 valid_lft forever preferred_lft forever 00:28:11.924 11:55:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:11.924 11:55:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:11.924 11:55:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:11.924 11:55:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:11.924 11:55:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:11.924 11:55:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:11.924 11:55:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:11.924 11:55:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:11.924 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:28:11.924 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:11.924 altname enp217s0f1np1 00:28:11.924 altname ens818f1np1 00:28:11.924 inet 192.168.100.9/24 scope global mlx_0_1 00:28:11.924 valid_lft forever preferred_lft forever 00:28:11.924 11:55:42 -- nvmf/common.sh@410 -- # return 0 00:28:11.924 11:55:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:11.924 11:55:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:11.924 11:55:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:11.924 11:55:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:11.924 11:55:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:11.924 11:55:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:11.924 11:55:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:11.924 11:55:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:11.924 11:55:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:12.184 11:55:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:12.184 11:55:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:12.184 11:55:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.184 11:55:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:12.184 11:55:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:12.184 11:55:42 -- nvmf/common.sh@104 -- # continue 2 00:28:12.184 11:55:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:12.184 11:55:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.184 11:55:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:12.184 11:55:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.184 11:55:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:12.184 11:55:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:12.184 11:55:42 -- nvmf/common.sh@104 -- # continue 2 00:28:12.184 11:55:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:12.184 11:55:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:12.184 11:55:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:12.184 11:55:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:12.184 11:55:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:12.184 11:55:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:12.184 11:55:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:12.184 11:55:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:12.184 11:55:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:12.184 11:55:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:12.184 11:55:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:12.184 11:55:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:12.184 11:55:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:12.184 192.168.100.9' 00:28:12.184 11:55:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:12.184 192.168.100.9' 00:28:12.184 11:55:42 -- nvmf/common.sh@445 -- # head -n 1 00:28:12.184 11:55:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:12.184 11:55:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:12.184 192.168.100.9' 00:28:12.184 11:55:42 -- nvmf/common.sh@446 -- # tail -n +2 00:28:12.184 11:55:42 -- nvmf/common.sh@446 -- # head -n 1 00:28:12.184 11:55:42 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:12.184 11:55:42 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:28:12.184 11:55:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:12.184 11:55:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:12.184 11:55:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:12.184 11:55:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:12.184 11:55:42 -- host/bdevperf.sh@25 -- # tgt_init 00:28:12.184 11:55:42 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:12.184 11:55:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:12.184 11:55:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:12.184 11:55:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.184 11:55:42 -- nvmf/common.sh@469 -- # nvmfpid=3892416 00:28:12.184 11:55:42 -- nvmf/common.sh@470 -- # waitforlisten 3892416 00:28:12.184 11:55:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:12.184 11:55:42 -- common/autotest_common.sh@829 -- # '[' -z 3892416 ']' 00:28:12.184 11:55:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.184 11:55:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.184 11:55:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.184 11:55:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.184 11:55:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.184 [2024-12-03 11:55:42.675485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:12.184 [2024-12-03 11:55:42.675539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.184 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.184 [2024-12-03 11:55:42.744832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:12.443 [2024-12-03 11:55:42.819605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:12.443 [2024-12-03 11:55:42.819715] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.443 [2024-12-03 11:55:42.819725] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.443 [2024-12-03 11:55:42.819733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
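At this point the two ConnectX ports carry 192.168.100.8 and 192.168.100.9, nvme-rdma is loaded, and nvmfappstart has launched the target with shm id 0 (-i 0), tracepoint mask 0xFFFF (-e) and core mask 0xE; the reactor notices that follow confirm it came up on cores 1-3. A rough equivalent of that start-up step, with the waitforlisten helper replaced by a plain poll on the RPC socket (a simplified stand-in, not the real helper):

    # sketch of the nvmfappstart step above (binary path, flags and socket from this log)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # stand-in for waitforlisten: wait until the UNIX domain RPC socket exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"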
00:28:12.443 [2024-12-03 11:55:42.819774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.443 [2024-12-03 11:55:42.819876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.443 [2024-12-03 11:55:42.819878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.009 11:55:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:13.009 11:55:43 -- common/autotest_common.sh@862 -- # return 0 00:28:13.009 11:55:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:13.009 11:55:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:13.009 11:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 11:55:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.009 11:55:43 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:13.009 11:55:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.009 11:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 [2024-12-03 11:55:43.573663] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ebe860/0x1ec2d50) succeed. 00:28:13.009 [2024-12-03 11:55:43.582953] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ebfdb0/0x1f043f0) succeed. 00:28:13.268 11:55:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 11:55:43 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:13.268 11:55:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.268 11:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.268 Malloc0 00:28:13.268 11:55:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 11:55:43 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:13.268 11:55:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.268 11:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.268 11:55:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 11:55:43 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:13.268 11:55:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.268 11:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.268 11:55:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 11:55:43 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:13.268 11:55:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.268 11:55:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.268 [2024-12-03 11:55:43.728872] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:13.268 11:55:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 11:55:43 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:13.268 11:55:43 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:13.268 11:55:43 -- nvmf/common.sh@520 -- # config=() 00:28:13.268 11:55:43 -- nvmf/common.sh@520 -- # local subsystem config 00:28:13.268 11:55:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:13.268 11:55:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:13.268 { 00:28:13.268 "params": { 00:28:13.268 "name": "Nvme$subsystem", 00:28:13.268 "trtype": 
"$TEST_TRANSPORT", 00:28:13.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.268 "adrfam": "ipv4", 00:28:13.268 "trsvcid": "$NVMF_PORT", 00:28:13.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.268 "hdgst": ${hdgst:-false}, 00:28:13.268 "ddgst": ${ddgst:-false} 00:28:13.268 }, 00:28:13.268 "method": "bdev_nvme_attach_controller" 00:28:13.268 } 00:28:13.268 EOF 00:28:13.268 )") 00:28:13.268 11:55:43 -- nvmf/common.sh@542 -- # cat 00:28:13.268 11:55:43 -- nvmf/common.sh@544 -- # jq . 00:28:13.268 11:55:43 -- nvmf/common.sh@545 -- # IFS=, 00:28:13.268 11:55:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:13.268 "params": { 00:28:13.268 "name": "Nvme1", 00:28:13.268 "trtype": "rdma", 00:28:13.268 "traddr": "192.168.100.8", 00:28:13.268 "adrfam": "ipv4", 00:28:13.268 "trsvcid": "4420", 00:28:13.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:13.268 "hdgst": false, 00:28:13.268 "ddgst": false 00:28:13.268 }, 00:28:13.268 "method": "bdev_nvme_attach_controller" 00:28:13.268 }' 00:28:13.268 [2024-12-03 11:55:43.779183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:13.269 [2024-12-03 11:55:43.779232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892707 ] 00:28:13.269 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.269 [2024-12-03 11:55:43.844269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.527 [2024-12-03 11:55:43.916662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.527 Running I/O for 1 seconds... 
00:28:14.901
00:28:14.901 Latency(us)
00:28:14.901 [2024-12-03T10:55:45.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:14.901 [2024-12-03T10:55:45.515Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:14.901 Verification LBA range: start 0x0 length 0x4000
00:28:14.901 Nvme1n1 : 1.00 25550.31 99.81 0.00 0.00 4986.00 1271.40 11848.91
00:28:14.901 [2024-12-03T10:55:45.515Z] ===================================================================================================================
00:28:14.901 [2024-12-03T10:55:45.515Z] Total : 25550.31 99.81 0.00 0.00 4986.00 1271.40 11848.91
00:28:14.901 11:55:45 -- host/bdevperf.sh@30 -- # bdevperfpid=3892983
00:28:14.901 11:55:45 -- host/bdevperf.sh@32 -- # sleep 3
00:28:14.901 11:55:45 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:14.901 11:55:45 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:14.901 11:55:45 -- nvmf/common.sh@520 -- # config=()
00:28:14.901 11:55:45 -- nvmf/common.sh@520 -- # local subsystem config
00:28:14.901 11:55:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:28:14.901 11:55:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:28:14.901 {
00:28:14.901 "params": {
00:28:14.901 "name": "Nvme$subsystem",
00:28:14.901 "trtype": "$TEST_TRANSPORT",
00:28:14.901 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:14.901 "adrfam": "ipv4",
00:28:14.901 "trsvcid": "$NVMF_PORT",
00:28:14.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:14.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:14.901 "hdgst": ${hdgst:-false},
00:28:14.901 "ddgst": ${ddgst:-false}
00:28:14.901 },
00:28:14.901 "method": "bdev_nvme_attach_controller"
00:28:14.901 }
00:28:14.901 EOF
00:28:14.901 )")
00:28:14.901 11:55:45 -- nvmf/common.sh@542 -- # cat
00:28:14.901 11:55:45 -- nvmf/common.sh@544 -- # jq .
00:28:14.901 11:55:45 -- nvmf/common.sh@545 -- # IFS=,
00:28:14.901 11:55:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:28:14.902 "params": {
00:28:14.902 "name": "Nvme1",
00:28:14.902 "trtype": "rdma",
00:28:14.902 "traddr": "192.168.100.8",
00:28:14.902 "adrfam": "ipv4",
00:28:14.902 "trsvcid": "4420",
00:28:14.902 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:14.902 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:14.902 "hdgst": false,
00:28:14.902 "ddgst": false
00:28:14.902 },
00:28:14.902 "method": "bdev_nvme_attach_controller"
00:28:14.902 }'
00:28:14.902 [2024-12-03 11:55:45.367553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:14.902 [2024-12-03 11:55:45.367604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892983 ]
00:28:14.902 EAL: No free 2048 kB hugepages reported on node 1
00:28:14.902 [2024-12-03 11:55:45.435539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:14.902 [2024-12-03 11:55:45.498710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:15.159 Running I/O for 15 seconds...
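This second bdevperf run (-t 15, started with -f) is the fail-over half of host/bdevperf.sh: three seconds in, the very next step kills nvmf_tgt pid 3892416 out from under the host, so every command still queued on qid:1 completes with ABORTED - SQ DELETION (status 00/08, i.e. generic status, Command Aborted due to SQ Deletion), which is what the burst of nvme_qpair NOTICE lines that follows records. A small sketch for tallying that burst from a saved copy of this console output (the log file name is hypothetical):

    # sketch: count aborted completions per queue in a saved copy of this log
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' nvmf-phy-autotest-console.log | sort | uniq -c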
00:28:18.443 11:55:48 -- host/bdevperf.sh@33 -- # kill -9 3892416 00:28:18.443 11:55:48 -- host/bdevperf.sh@35 -- # sleep 3 00:28:19.014 [2024-12-03 11:55:49.354448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.014 [2024-12-03 11:55:49.354487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183000 00:28:19.015 [2024-12-03 11:55:49.354553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183000 00:28:19.015 [2024-12-03 11:55:49.354572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.015 [2024-12-03 11:55:49.354613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.015 [2024-12-03 11:55:49.354651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 
11:55:49.354661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183000 00:28:19.015 [2024-12-03 11:55:49.354670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183000 00:28:19.015 [2024-12-03 11:55:49.354689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183000 00:28:19.015 [2024-12-03 11:55:49.354747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183000 00:28:19.015 [2024-12-03 11:55:49.354827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354838] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.015 [2024-12-03 11:55:49.354867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.015 [2024-12-03 11:55:49.354886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.015 [2024-12-03 11:55:49.354905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183000 00:28:19.015 [2024-12-03 11:55:49.354943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.015 [2024-12-03 11:55:49.354961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.015 [2024-12-03 11:55:49.354979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.354990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.354998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.355008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x182f00 00:28:19.015 [2024-12-03 11:55:49.355017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.355027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.015 [2024-12-03 11:55:49.355035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.015 [2024-12-03 11:55:49.355046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 
[2024-12-03 11:55:49.355214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.016 [2024-12-03 11:55:49.355282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.016 [2024-12-03 11:55:49.355301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.016 [2024-12-03 11:55:49.355359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21616 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000138a5080 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.016 [2024-12-03 11:55:49.355436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182f00 00:28:19.016 [2024-12-03 11:55:49.355534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.016 [2024-12-03 11:55:49.355553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355572] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.016 [2024-12-03 11:55:49.355601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183000 00:28:19.016 [2024-12-03 11:55:49.355610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x182f00 00:28:19.017 [2024-12-03 11:55:49.355687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x182f00 00:28:19.017 [2024-12-03 11:55:49.355724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183000 00:28:19.017 [2024-12-03 11:55:49.355746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:21720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x182f00 00:28:19.017 [2024-12-03 11:55:49.355765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183000 00:28:19.017 [2024-12-03 11:55:49.355784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182f00 00:28:19.017 [2024-12-03 11:55:49.355822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x182f00 00:28:19.017 [2024-12-03 11:55:49.355879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x182f00 00:28:19.017 [2024-12-03 11:55:49.355917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x182f00 00:28:19.017 [2024-12-03 11:55:49.355936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183000 00:28:19.017 [2024-12-03 11:55:49.355956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.355976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.355986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183000 00:28:19.017 [2024-12-03 11:55:49.355995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.356013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.356032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183000 00:28:19.017 [2024-12-03 11:55:49.356051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.356069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.356088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.356107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.017 [2024-12-03 11:55:49.356131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183000 00:28:19.017 [2024-12-03 11:55:49.356149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183000 00:28:19.017 [2024-12-03 11:55:49.356179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.017 [2024-12-03 11:55:49.356189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 
sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183000 00:28:19.018 
[2024-12-03 11:55:49.356658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183000 00:28:19.018 [2024-12-03 11:55:49.356694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x182f00 00:28:19.018 [2024-12-03 11:55:49.356712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.018 [2024-12-03 11:55:49.356721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.018 [2024-12-03 11:55:49.356729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182f00 00:28:19.019 [2024-12-03 11:55:49.356747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x182f00 00:28:19.019 [2024-12-03 11:55:49.356766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x182f00 00:28:19.019 [2024-12-03 11:55:49.356784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183000 00:28:19.019 [2024-12-03 11:55:49.356801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183000 00:28:19.019 [2024-12-03 11:55:49.356820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.019 [2024-12-03 11:55:49.356839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.019 [2024-12-03 11:55:49.356857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182f00 00:28:19.019 [2024-12-03 11:55:49.356876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183000 00:28:19.019 [2024-12-03 11:55:49.356894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.356903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.019 [2024-12-03 11:55:49.356911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb52c000 sqhd:5310 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.358923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:19.019 [2024-12-03 11:55:49.358962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:19.019 [2024-12-03 11:55:49.358991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22056 len:8 PRP1 0x0 PRP2 0x0 00:28:19.019 [2024-12-03 11:55:49.359022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.019 [2024-12-03 11:55:49.359115] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
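The dump above is the expected pattern when an RDMA qpair is torn down mid-run: every command still queued on qid 1 completes with the generic status ABORTED - SQ DELETION (00/08), the qpair is disconnected and freed, and bdev_nvme then schedules a controller reset. A small post-processing sketch for a saved copy of this console output, assuming it was captured to a file named bdevperf_console.log (the file name is hypothetical; the grep patterns come from the lines above):

# Count how many queued commands were aborted in this teardown, then split
# the aborted submissions on qid 1 into reads vs writes.
grep -o 'ABORTED - SQ DELETION' bdevperf_console.log | wc -l
grep -o '\*NOTICE\*: READ sqid:1\|\*NOTICE\*: WRITE sqid:1' bdevperf_console.log | sort | uniq -c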
00:28:19.019 [2024-12-03 11:55:49.360660] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.019 [2024-12-03 11:55:49.374471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.019 [2024-12-03 11:55:49.378049] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.019 [2024-12-03 11:55:49.378102] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.019 [2024-12-03 11:55:49.378129] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:19.955 [2024-12-03 11:55:50.382318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.955 [2024-12-03 11:55:50.382391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.955 [2024-12-03 11:55:50.382732] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.955 [2024-12-03 11:55:50.382767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.955 [2024-12-03 11:55:50.382799] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:19.955 [2024-12-03 11:55:50.383056] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:19.955 [2024-12-03 11:55:50.384640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
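Each retry in this loop follows the same sequence: nvme_ctrlr_disconnect starts a reset, the RDMA connection manager answers with RDMA_CM_EVENT_REJECTED because nothing is listening on the target side at this point, the connect fails with -74, and the reset is reported as failed before the next attempt. A hedged manual check from the host, independent of the test scripts, is to probe the discovery service on the same address and port (assumes nvme-cli is installed and the nvme-rdma module is loaded, which nvmftestinit does later in this log):

# Hypothetical manual probe, not part of bdevperf.sh: while the target is
# down this fails much like the rejected connects above; once the listener
# is re-created it returns the discovery log page for cnode1.
modprobe nvme-rdma
nvme discover -t rdma -a 192.168.100.8 -s 4420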
00:28:19.955 [2024-12-03 11:55:50.394812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.955 [2024-12-03 11:55:50.396939] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.955 [2024-12-03 11:55:50.396960] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.956 [2024-12-03 11:55:50.396968] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:20.894 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3892416 Killed "${NVMF_APP[@]}" "$@" 00:28:20.894 11:55:51 -- host/bdevperf.sh@36 -- # tgt_init 00:28:20.894 11:55:51 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:20.894 11:55:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:20.894 11:55:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:20.894 11:55:51 -- common/autotest_common.sh@10 -- # set +x 00:28:20.894 11:55:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:20.894 11:55:51 -- nvmf/common.sh@469 -- # nvmfpid=3893892 00:28:20.894 11:55:51 -- nvmf/common.sh@470 -- # waitforlisten 3893892 00:28:20.894 11:55:51 -- common/autotest_common.sh@829 -- # '[' -z 3893892 ']' 00:28:20.894 11:55:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.894 11:55:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.894 11:55:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.894 11:55:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.894 11:55:51 -- common/autotest_common.sh@10 -- # set +x 00:28:20.894 [2024-12-03 11:55:51.369845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:20.894 [2024-12-03 11:55:51.369897] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.894 [2024-12-03 11:55:51.400948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:20.894 [2024-12-03 11:55:51.400977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.894 [2024-12-03 11:55:51.401079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.894 [2024-12-03 11:55:51.401091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.894 [2024-12-03 11:55:51.401101] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:20.894 [2024-12-03 11:55:51.402344] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:20.894 [2024-12-03 11:55:51.402912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:20.894 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.894 [2024-12-03 11:55:51.414212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.894 [2024-12-03 11:55:51.416365] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:20.894 [2024-12-03 11:55:51.416386] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:20.894 [2024-12-03 11:55:51.416394] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:20.894 [2024-12-03 11:55:51.442207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:21.154 [2024-12-03 11:55:51.516580] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:21.154 [2024-12-03 11:55:51.516688] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.154 [2024-12-03 11:55:51.516697] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.154 [2024-12-03 11:55:51.516710] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.154 [2024-12-03 11:55:51.516753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.154 [2024-12-03 11:55:51.516819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.154 [2024-12-03 11:55:51.516821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.722 11:55:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.722 11:55:52 -- common/autotest_common.sh@862 -- # return 0 00:28:21.722 11:55:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:21.722 11:55:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:21.722 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.722 11:55:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.722 11:55:52 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:21.722 11:55:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.722 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.722 [2024-12-03 11:55:52.284151] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x108c860/0x1090d50) succeed. 00:28:21.722 [2024-12-03 11:55:52.293527] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x108ddb0/0x10d23f0) succeed. 
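At this point the target side has been restarted: nvmf_tgt comes up with -i 0 -e 0xFFFF -m 0xE, reactors start on cores 1, 2 and 3, and nvmf_create_transport recreates the RDMA transport on both mlx5 devices. A minimal sketch of the same restart done by hand, using only values visible in this log; the rpc.py path and the readiness poll are assumptions standing in for what waitforlisten does internally:

# Restart the NVMe-oF target with the same core mask and trace flags.
NVMF_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
# Poll the default RPC socket until the app accepts JSON-RPC requests,
# then configuration calls like the rpc_cmd sequence below can be issued.
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done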
00:28:21.982 11:55:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.982 11:55:52 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.982 11:55:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.982 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.982 Malloc0 00:28:21.982 11:55:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.982 11:55:52 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.982 11:55:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.982 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.982 [2024-12-03 11:55:52.420488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:21.982 [2024-12-03 11:55:52.420523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.982 [2024-12-03 11:55:52.420654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.982 [2024-12-03 11:55:52.420665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.982 [2024-12-03 11:55:52.420676] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:21.982 [2024-12-03 11:55:52.421998] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:21.982 [2024-12-03 11:55:52.422416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.982 11:55:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.982 11:55:52 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.982 11:55:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.982 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.982 11:55:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.982 11:55:52 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:21.982 11:55:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.982 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.982 [2024-12-03 11:55:52.433961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.982 [2024-12-03 11:55:52.435058] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:21.982 11:55:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.982 11:55:52 -- host/bdevperf.sh@38 -- # wait 3892983 00:28:21.982 [2024-12-03 11:55:52.470668] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
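The rpc_cmd calls above configure the restarted target end to end: the RDMA transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and the listener on 192.168.100.8:4420, after which the stalled bdevperf initiator reconnects and its controller reset completes. The same sequence expressed as direct rpc.py calls, as a sketch; the rpc.py path and default socket are assumptions, the arguments are taken verbatim from the trace:

# Sketch: replay of the rpc_cmd sequence above as direct rpc.py calls
# against the default /var/tmp/spdk.sock socket (path assumed).
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420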
00:28:31.957 00:28:31.957 Latency(us) 00:28:31.957 [2024-12-03T10:56:02.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.957 [2024-12-03T10:56:02.571Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:31.957 Verification LBA range: start 0x0 length 0x4000 00:28:31.957 Nvme1n1 : 15.00 18690.66 73.01 16309.84 0.00 3646.23 579.99 1033476.51 00:28:31.957 [2024-12-03T10:56:02.571Z] =================================================================================================================== 00:28:31.957 [2024-12-03T10:56:02.571Z] Total : 18690.66 73.01 16309.84 0.00 3646.23 579.99 1033476.51 00:28:31.957 11:56:00 -- host/bdevperf.sh@39 -- # sync 00:28:31.957 11:56:00 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.957 11:56:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.957 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:28:31.957 11:56:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.957 11:56:00 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:31.957 11:56:00 -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:31.957 11:56:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:31.957 11:56:00 -- nvmf/common.sh@116 -- # sync 00:28:31.957 11:56:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:31.957 11:56:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:31.957 11:56:00 -- nvmf/common.sh@119 -- # set +e 00:28:31.957 11:56:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:31.957 11:56:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:31.957 rmmod nvme_rdma 00:28:31.957 rmmod nvme_fabrics 00:28:31.957 11:56:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:31.957 11:56:01 -- nvmf/common.sh@123 -- # set -e 00:28:31.957 11:56:01 -- nvmf/common.sh@124 -- # return 0 00:28:31.957 11:56:01 -- nvmf/common.sh@477 -- # '[' -n 3893892 ']' 00:28:31.957 11:56:01 -- nvmf/common.sh@478 -- # killprocess 3893892 00:28:31.957 11:56:01 -- common/autotest_common.sh@936 -- # '[' -z 3893892 ']' 00:28:31.957 11:56:01 -- common/autotest_common.sh@940 -- # kill -0 3893892 00:28:31.957 11:56:01 -- common/autotest_common.sh@941 -- # uname 00:28:31.957 11:56:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:31.957 11:56:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3893892 00:28:31.957 11:56:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:31.957 11:56:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:31.957 11:56:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3893892' 00:28:31.957 killing process with pid 3893892 00:28:31.957 11:56:01 -- common/autotest_common.sh@955 -- # kill 3893892 00:28:31.957 11:56:01 -- common/autotest_common.sh@960 -- # wait 3893892 00:28:31.957 11:56:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:31.957 11:56:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:31.957 00:28:31.957 real 0m25.826s 00:28:31.957 user 1m5.007s 00:28:31.957 sys 0m6.389s 00:28:31.957 11:56:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:31.957 11:56:01 -- common/autotest_common.sh@10 -- # set +x 00:28:31.957 ************************************ 00:28:31.957 END TEST nvmf_bdevperf 00:28:31.957 ************************************ 00:28:31.957 11:56:01 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh 
--transport=rdma 00:28:31.957 11:56:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:31.957 11:56:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:31.957 11:56:01 -- common/autotest_common.sh@10 -- # set +x 00:28:31.957 ************************************ 00:28:31.957 START TEST nvmf_target_disconnect 00:28:31.957 ************************************ 00:28:31.957 11:56:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:31.957 * Looking for test storage... 00:28:31.957 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:31.957 11:56:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:31.957 11:56:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:31.957 11:56:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:31.957 11:56:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:31.957 11:56:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:31.957 11:56:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:31.957 11:56:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:31.957 11:56:01 -- scripts/common.sh@335 -- # IFS=.-: 00:28:31.957 11:56:01 -- scripts/common.sh@335 -- # read -ra ver1 00:28:31.957 11:56:01 -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.957 11:56:01 -- scripts/common.sh@336 -- # read -ra ver2 00:28:31.957 11:56:01 -- scripts/common.sh@337 -- # local 'op=<' 00:28:31.957 11:56:01 -- scripts/common.sh@339 -- # ver1_l=2 00:28:31.957 11:56:01 -- scripts/common.sh@340 -- # ver2_l=1 00:28:31.957 11:56:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:31.957 11:56:01 -- scripts/common.sh@343 -- # case "$op" in 00:28:31.957 11:56:01 -- scripts/common.sh@344 -- # : 1 00:28:31.957 11:56:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:31.957 11:56:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:31.957 11:56:01 -- scripts/common.sh@364 -- # decimal 1 00:28:31.957 11:56:01 -- scripts/common.sh@352 -- # local d=1 00:28:31.957 11:56:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.957 11:56:01 -- scripts/common.sh@354 -- # echo 1 00:28:31.957 11:56:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:31.957 11:56:01 -- scripts/common.sh@365 -- # decimal 2 00:28:31.958 11:56:01 -- scripts/common.sh@352 -- # local d=2 00:28:31.958 11:56:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.958 11:56:01 -- scripts/common.sh@354 -- # echo 2 00:28:31.958 11:56:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:31.958 11:56:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:31.958 11:56:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:31.958 11:56:01 -- scripts/common.sh@367 -- # return 0 00:28:31.958 11:56:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.958 11:56:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:31.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.958 --rc genhtml_branch_coverage=1 00:28:31.958 --rc genhtml_function_coverage=1 00:28:31.958 --rc genhtml_legend=1 00:28:31.958 --rc geninfo_all_blocks=1 00:28:31.958 --rc geninfo_unexecuted_blocks=1 00:28:31.958 00:28:31.958 ' 00:28:31.958 11:56:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:31.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.958 --rc genhtml_branch_coverage=1 00:28:31.958 --rc genhtml_function_coverage=1 00:28:31.958 --rc genhtml_legend=1 00:28:31.958 --rc geninfo_all_blocks=1 00:28:31.958 --rc geninfo_unexecuted_blocks=1 00:28:31.958 00:28:31.958 ' 00:28:31.958 11:56:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:31.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.958 --rc genhtml_branch_coverage=1 00:28:31.958 --rc genhtml_function_coverage=1 00:28:31.958 --rc genhtml_legend=1 00:28:31.958 --rc geninfo_all_blocks=1 00:28:31.958 --rc geninfo_unexecuted_blocks=1 00:28:31.958 00:28:31.958 ' 00:28:31.958 11:56:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:31.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.958 --rc genhtml_branch_coverage=1 00:28:31.958 --rc genhtml_function_coverage=1 00:28:31.958 --rc genhtml_legend=1 00:28:31.958 --rc geninfo_all_blocks=1 00:28:31.958 --rc geninfo_unexecuted_blocks=1 00:28:31.958 00:28:31.958 ' 00:28:31.958 11:56:01 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.958 11:56:01 -- nvmf/common.sh@7 -- # uname -s 00:28:31.958 11:56:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.958 11:56:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.958 11:56:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.958 11:56:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.958 11:56:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.958 11:56:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.958 11:56:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.958 11:56:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.958 11:56:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.958 11:56:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.958 11:56:01 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:31.958 11:56:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:31.958 11:56:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.958 11:56:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.958 11:56:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.958 11:56:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:31.958 11:56:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.958 11:56:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.958 11:56:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.958 11:56:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.958 11:56:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.958 11:56:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.958 11:56:01 -- paths/export.sh@5 -- # export PATH 00:28:31.958 11:56:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.958 11:56:01 -- nvmf/common.sh@46 -- # : 0 00:28:31.958 11:56:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:31.958 11:56:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:31.958 11:56:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:31.958 11:56:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.958 11:56:01 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.958 11:56:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:31.958 11:56:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:31.958 11:56:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:31.958 11:56:01 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:31.958 11:56:01 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:31.958 11:56:01 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:31.958 11:56:01 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:31.958 11:56:01 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:31.958 11:56:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.958 11:56:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:31.958 11:56:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:31.958 11:56:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:31.958 11:56:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.958 11:56:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.958 11:56:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.958 11:56:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:31.958 11:56:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:31.958 11:56:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:31.958 11:56:01 -- common/autotest_common.sh@10 -- # set +x 00:28:38.519 11:56:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:38.519 11:56:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:38.519 11:56:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:38.519 11:56:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:38.519 11:56:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:38.519 11:56:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:38.519 11:56:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:38.519 11:56:07 -- nvmf/common.sh@294 -- # net_devs=() 00:28:38.519 11:56:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:38.519 11:56:07 -- nvmf/common.sh@295 -- # e810=() 00:28:38.519 11:56:07 -- nvmf/common.sh@295 -- # local -ga e810 00:28:38.519 11:56:07 -- nvmf/common.sh@296 -- # x722=() 00:28:38.519 11:56:07 -- nvmf/common.sh@296 -- # local -ga x722 00:28:38.519 11:56:07 -- nvmf/common.sh@297 -- # mlx=() 00:28:38.519 11:56:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:38.520 11:56:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.520 11:56:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 
00:28:38.520 11:56:07 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:38.520 11:56:07 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:38.520 11:56:07 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:38.520 11:56:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:38.520 11:56:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:38.520 11:56:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:38.520 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:38.520 11:56:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:38.520 11:56:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:38.520 11:56:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:38.520 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:38.520 11:56:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:38.520 11:56:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:38.520 11:56:07 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:38.520 11:56:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.520 11:56:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:38.520 11:56:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.520 11:56:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:38.520 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:38.520 11:56:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.520 11:56:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:38.520 11:56:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.520 11:56:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:38.520 11:56:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.520 11:56:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:38.520 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:38.520 11:56:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.520 11:56:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:38.520 11:56:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:38.520 11:56:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:38.520 11:56:07 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:38.520 11:56:07 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:38.520 11:56:07 -- nvmf/common.sh@57 -- # uname 
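Device discovery has found the two mlx5 ports (0x15b3:0x1015) at 0000:d9:00.0 and 0000:d9:00.1 with netdevs mlx_0_0 and mlx_0_1, so the test proceeds with real hardware rather than a soft-RoCE setup, and rdma_device_init goes on to load the IB/RDMA kernel modules. A minimal sketch of the same sysfs lookup the script performs, using the PCI addresses from the lines above:

# Sketch of the discovery done by gather_supported_nvmf_pci_devs: map each
# detected Mellanox function to the netdev exposed under its sysfs node.
for pci in 0000:d9:00.0 0000:d9:00.1; do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
done
# Expected output here: 0000:d9:00.0 -> mlx_0_0 and 0000:d9:00.1 -> mlx_0_1,
# matching the "Found net devices under ..." lines above.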
00:28:38.520 11:56:07 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:38.520 11:56:07 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:38.520 11:56:07 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:38.520 11:56:07 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:38.520 11:56:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:38.520 11:56:07 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:38.520 11:56:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:38.520 11:56:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:38.520 11:56:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:38.520 11:56:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:38.520 11:56:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:38.520 11:56:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:38.520 11:56:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:38.520 11:56:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:38.520 11:56:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:38.520 11:56:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:38.520 11:56:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:38.520 11:56:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:38.520 11:56:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:38.520 11:56:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:38.520 11:56:08 -- nvmf/common.sh@104 -- # continue 2 00:28:38.520 11:56:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:38.520 11:56:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:38.520 11:56:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:38.520 11:56:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:38.520 11:56:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:38.520 11:56:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:38.520 11:56:08 -- nvmf/common.sh@104 -- # continue 2 00:28:38.520 11:56:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:38.520 11:56:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:38.520 11:56:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:38.520 11:56:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:38.520 11:56:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:38.520 11:56:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:38.520 11:56:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:38.520 11:56:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:38.520 11:56:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:38.520 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:38.520 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:38.520 altname enp217s0f0np0 00:28:38.520 altname ens818f0np0 00:28:38.520 inet 192.168.100.8/24 scope global mlx_0_0 00:28:38.520 valid_lft forever preferred_lft forever 00:28:38.520 11:56:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:38.520 11:56:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:38.520 11:56:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:38.520 11:56:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:38.520 11:56:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:38.520 11:56:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:38.520 11:56:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:38.520 11:56:08 -- nvmf/common.sh@74 -- # 
[[ -z 192.168.100.9 ]] 00:28:38.520 11:56:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:38.520 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:38.520 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:38.520 altname enp217s0f1np1 00:28:38.520 altname ens818f1np1 00:28:38.520 inet 192.168.100.9/24 scope global mlx_0_1 00:28:38.520 valid_lft forever preferred_lft forever 00:28:38.520 11:56:08 -- nvmf/common.sh@410 -- # return 0 00:28:38.521 11:56:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:38.521 11:56:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:38.521 11:56:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:38.521 11:56:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:38.521 11:56:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:38.521 11:56:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:38.521 11:56:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:38.521 11:56:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:38.521 11:56:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:38.521 11:56:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:38.521 11:56:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:38.521 11:56:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:38.521 11:56:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:38.521 11:56:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:38.521 11:56:08 -- nvmf/common.sh@104 -- # continue 2 00:28:38.521 11:56:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:38.521 11:56:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:38.521 11:56:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:38.521 11:56:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:38.521 11:56:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:38.521 11:56:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:38.521 11:56:08 -- nvmf/common.sh@104 -- # continue 2 00:28:38.521 11:56:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:38.521 11:56:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:38.521 11:56:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:38.521 11:56:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:38.521 11:56:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:38.521 11:56:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:38.521 11:56:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:38.521 11:56:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:38.521 11:56:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:38.521 11:56:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:38.521 11:56:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:38.521 11:56:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:38.521 11:56:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:38.521 192.168.100.9' 00:28:38.521 11:56:08 -- nvmf/common.sh@445 -- # head -n 1 00:28:38.521 11:56:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:38.521 192.168.100.9' 00:28:38.521 11:56:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:38.521 11:56:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:38.521 192.168.100.9' 00:28:38.521 11:56:08 -- nvmf/common.sh@446 -- # tail -n +2 00:28:38.521 11:56:08 -- nvmf/common.sh@446 -- # 
head -n 1 00:28:38.521 11:56:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:38.521 11:56:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:38.521 11:56:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:38.521 11:56:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:38.521 11:56:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:38.521 11:56:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:38.521 11:56:08 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:38.521 11:56:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:38.521 11:56:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:38.521 11:56:08 -- common/autotest_common.sh@10 -- # set +x 00:28:38.521 ************************************ 00:28:38.521 START TEST nvmf_target_disconnect_tc1 00:28:38.521 ************************************ 00:28:38.521 11:56:08 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:28:38.521 11:56:08 -- host/target_disconnect.sh@32 -- # set +e 00:28:38.521 11:56:08 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:38.521 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.521 [2024-12-03 11:56:08.279320] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:38.521 [2024-12-03 11:56:08.279372] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:38.521 [2024-12-03 11:56:08.279386] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:28:38.781 [2024-12-03 11:56:09.283403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:38.781 [2024-12-03 11:56:09.283463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
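With the RDMA modules loaded, nvmftestinit resolves 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1) as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP and loads nvme-rdma; the first address is the one the target_disconnect tests aim at. The get_ip_address helper reduces to the pipeline shown here, a sketch built from the exact commands in the trace:

# Derive the per-port IPv4 addresses the same way get_ip_address does.
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# Prints 192.168.100.8 and 192.168.100.9 for this host.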
00:28:38.781 [2024-12-03 11:56:09.283506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:28:38.781 [2024-12-03 11:56:09.283564] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:38.781 [2024-12-03 11:56:09.283592] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:38.781 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:28:38.781 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:38.781 Initializing NVMe Controllers 00:28:38.781 11:56:09 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:38.781 11:56:09 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:38.781 11:56:09 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:28:38.781 11:56:09 -- common/autotest_common.sh@1142 -- # return 0 00:28:38.781 11:56:09 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:38.781 11:56:09 -- host/target_disconnect.sh@41 -- # set -e 00:28:38.781 00:28:38.781 real 0m1.128s 00:28:38.781 user 0m0.890s 00:28:38.781 sys 0m0.227s 00:28:38.781 11:56:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:38.781 11:56:09 -- common/autotest_common.sh@10 -- # set +x 00:28:38.781 ************************************ 00:28:38.781 END TEST nvmf_target_disconnect_tc1 00:28:38.781 ************************************ 00:28:38.781 11:56:09 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:38.781 11:56:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:38.781 11:56:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:38.781 11:56:09 -- common/autotest_common.sh@10 -- # set +x 00:28:38.781 ************************************ 00:28:38.781 START TEST nvmf_target_disconnect_tc2 00:28:38.781 ************************************ 00:28:38.781 11:56:09 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:28:38.781 11:56:09 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:28:38.781 11:56:09 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:38.781 11:56:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:38.781 11:56:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:38.781 11:56:09 -- common/autotest_common.sh@10 -- # set +x 00:28:38.781 11:56:09 -- nvmf/common.sh@469 -- # nvmfpid=3899133 00:28:38.781 11:56:09 -- nvmf/common.sh@470 -- # waitforlisten 3899133 00:28:38.781 11:56:09 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:38.781 11:56:09 -- common/autotest_common.sh@829 -- # '[' -z 3899133 ']' 00:28:38.781 11:56:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.781 11:56:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.781 11:56:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.781 11:56:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.781 11:56:09 -- common/autotest_common.sh@10 -- # set +x 00:28:39.040 [2024-12-03 11:56:09.400759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:39.040 [2024-12-03 11:56:09.400812] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.040 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.040 [2024-12-03 11:56:09.485616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.040 [2024-12-03 11:56:09.556635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:39.040 [2024-12-03 11:56:09.556739] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.040 [2024-12-03 11:56:09.556749] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.040 [2024-12-03 11:56:09.556757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.040 [2024-12-03 11:56:09.556821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:39.040 [2024-12-03 11:56:09.556940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:39.040 [2024-12-03 11:56:09.556983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:39.040 [2024-12-03 11:56:09.556985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:39.977 11:56:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.977 11:56:10 -- common/autotest_common.sh@862 -- # return 0 00:28:39.977 11:56:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:39.977 11:56:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:39.977 11:56:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.977 11:56:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.977 11:56:10 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:39.977 11:56:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.977 11:56:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.977 Malloc0 00:28:39.977 11:56:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.977 11:56:10 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:39.977 11:56:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.977 11:56:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.977 [2024-12-03 11:56:10.316354] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1eb73c0/0x1ec2dc0) succeed. 00:28:39.977 [2024-12-03 11:56:10.325811] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1eb89b0/0x1f42e00) succeed. 
00:28:39.977 11:56:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.977 11:56:10 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:39.977 11:56:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.977 11:56:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.977 11:56:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.977 11:56:10 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:39.977 11:56:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.977 11:56:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.977 11:56:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.977 11:56:10 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:39.977 11:56:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.977 11:56:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.977 [2024-12-03 11:56:10.471625] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:39.977 11:56:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.977 11:56:10 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:39.977 11:56:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.977 11:56:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.977 11:56:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.977 11:56:10 -- host/target_disconnect.sh@50 -- # reconnectpid=3899215 00:28:39.977 11:56:10 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:39.977 11:56:10 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:39.977 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.895 11:56:12 -- host/target_disconnect.sh@53 -- # kill -9 3899133 00:28:41.895 11:56:12 -- host/target_disconnect.sh@55 -- # sleep 2 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with 
error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Read completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 Write completed with error (sct=0, sc=8) 00:28:43.268 starting I/O failed 00:28:43.268 [2024-12-03 11:56:13.667754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.200 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3899133 Killed "${NVMF_APP[@]}" "$@" 00:28:44.200 11:56:14 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:28:44.200 11:56:14 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:44.200 11:56:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:44.200 11:56:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:44.200 11:56:14 -- common/autotest_common.sh@10 -- # set +x 00:28:44.200 11:56:14 -- nvmf/common.sh@469 -- # nvmfpid=3900001 00:28:44.200 11:56:14 -- nvmf/common.sh@470 -- # waitforlisten 3900001 00:28:44.200 11:56:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:44.200 11:56:14 -- common/autotest_common.sh@829 -- # '[' -z 3900001 ']' 00:28:44.200 11:56:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.200 11:56:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:44.200 11:56:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.200 11:56:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:44.200 11:56:14 -- common/autotest_common.sh@10 -- # set +x 00:28:44.200 [2024-12-03 11:56:14.550384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:44.200 [2024-12-03 11:56:14.550436] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.200 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.200 [2024-12-03 11:56:14.633476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Write completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 Read completed with error (sct=0, sc=8) 00:28:44.200 starting I/O failed 00:28:44.200 [2024-12-03 11:56:14.672887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.200 [2024-12-03 11:56:14.704918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:44.200 [2024-12-03 11:56:14.705023] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.200 [2024-12-03 11:56:14.705033] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.200 [2024-12-03 11:56:14.705041] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.200 [2024-12-03 11:56:14.705165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:44.200 [2024-12-03 11:56:14.705273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:44.200 [2024-12-03 11:56:14.705382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:44.200 [2024-12-03 11:56:14.705383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:44.765 11:56:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:44.765 11:56:15 -- common/autotest_common.sh@862 -- # return 0 00:28:44.765 11:56:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:44.765 11:56:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:44.765 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:28:45.023 11:56:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.023 11:56:15 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:45.023 11:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.023 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:28:45.023 Malloc0 00:28:45.023 11:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.023 11:56:15 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:45.023 11:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.023 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:28:45.023 [2024-12-03 11:56:15.467519] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ab33c0/0x1abedc0) succeed. 00:28:45.023 [2024-12-03 11:56:15.476888] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ab49b0/0x1b3ee00) succeed. 
00:28:45.023 11:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.023 11:56:15 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.023 11:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.023 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:28:45.023 11:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.023 11:56:15 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:45.023 11:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.023 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:28:45.023 11:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.023 11:56:15 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:45.023 11:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.023 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:28:45.023 [2024-12-03 11:56:15.617979] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:45.023 11:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.023 11:56:15 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:45.023 11:56:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.023 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:28:45.023 11:56:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.023 11:56:15 -- host/target_disconnect.sh@58 -- # wait 3899215 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with 
error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Write completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 Read completed with error (sct=0, sc=8) 00:28:45.282 starting I/O failed 00:28:45.282 [2024-12-03 11:56:15.677955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.282 [2024-12-03 11:56:15.681750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.282 [2024-12-03 11:56:15.681806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.282 [2024-12-03 11:56:15.681828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.282 [2024-12-03 11:56:15.681838] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.282 [2024-12-03 11:56:15.681853] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.282 [2024-12-03 11:56:15.692062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.282 qpair failed and we were unable to recover it. 00:28:45.282 [2024-12-03 11:56:15.701852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.282 [2024-12-03 11:56:15.701892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.282 [2024-12-03 11:56:15.701909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.282 [2024-12-03 11:56:15.701919] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.282 [2024-12-03 11:56:15.701928] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.282 [2024-12-03 11:56:15.712164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.282 qpair failed and we were unable to recover it. 
00:28:45.282 [2024-12-03 11:56:15.721817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.282 [2024-12-03 11:56:15.721860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.282 [2024-12-03 11:56:15.721878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.282 [2024-12-03 11:56:15.721888] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.282 [2024-12-03 11:56:15.721896] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.282 [2024-12-03 11:56:15.732153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.282 qpair failed and we were unable to recover it. 00:28:45.282 [2024-12-03 11:56:15.742021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.282 [2024-12-03 11:56:15.742074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.282 [2024-12-03 11:56:15.742091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.282 [2024-12-03 11:56:15.742100] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.282 [2024-12-03 11:56:15.742116] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.282 [2024-12-03 11:56:15.752217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.282 qpair failed and we were unable to recover it. 00:28:45.282 [2024-12-03 11:56:15.762211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.282 [2024-12-03 11:56:15.762251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.282 [2024-12-03 11:56:15.762269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.282 [2024-12-03 11:56:15.762278] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.282 [2024-12-03 11:56:15.762287] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.282 [2024-12-03 11:56:15.772396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.282 qpair failed and we were unable to recover it. 
00:28:45.282 [2024-12-03 11:56:15.782126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.282 [2024-12-03 11:56:15.782167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.282 [2024-12-03 11:56:15.782183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.282 [2024-12-03 11:56:15.782193] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.282 [2024-12-03 11:56:15.782201] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.282 [2024-12-03 11:56:15.792318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.282 qpair failed and we were unable to recover it. 00:28:45.282 [2024-12-03 11:56:15.802190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.282 [2024-12-03 11:56:15.802230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.282 [2024-12-03 11:56:15.802264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.282 [2024-12-03 11:56:15.802274] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.282 [2024-12-03 11:56:15.802283] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.282 [2024-12-03 11:56:15.812421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.282 qpair failed and we were unable to recover it. 00:28:45.282 [2024-12-03 11:56:15.822198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.282 [2024-12-03 11:56:15.822243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.282 [2024-12-03 11:56:15.822261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.283 [2024-12-03 11:56:15.822270] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.283 [2024-12-03 11:56:15.822279] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.283 [2024-12-03 11:56:15.832598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.283 qpair failed and we were unable to recover it. 
00:28:45.283 [2024-12-03 11:56:15.842387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.283 [2024-12-03 11:56:15.842435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.283 [2024-12-03 11:56:15.842453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.283 [2024-12-03 11:56:15.842462] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.283 [2024-12-03 11:56:15.842472] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.283 [2024-12-03 11:56:15.852483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.283 qpair failed and we were unable to recover it. 00:28:45.283 [2024-12-03 11:56:15.862406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.283 [2024-12-03 11:56:15.862450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.283 [2024-12-03 11:56:15.862467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.283 [2024-12-03 11:56:15.862476] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.283 [2024-12-03 11:56:15.862486] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.283 [2024-12-03 11:56:15.872711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.283 qpair failed and we were unable to recover it. 00:28:45.283 [2024-12-03 11:56:15.882271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.283 [2024-12-03 11:56:15.882308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.283 [2024-12-03 11:56:15.882325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.283 [2024-12-03 11:56:15.882334] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.283 [2024-12-03 11:56:15.882347] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.283 [2024-12-03 11:56:15.892668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.283 qpair failed and we were unable to recover it. 
00:28:45.541 [2024-12-03 11:56:15.902422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:15.902464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:15.902482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:15.902492] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.541 [2024-12-03 11:56:15.902502] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.541 [2024-12-03 11:56:15.912956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.541 qpair failed and we were unable to recover it. 00:28:45.541 [2024-12-03 11:56:15.922581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:15.922624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:15.922648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:15.922658] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.541 [2024-12-03 11:56:15.922668] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.541 [2024-12-03 11:56:15.933029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.541 qpair failed and we were unable to recover it. 00:28:45.541 [2024-12-03 11:56:15.942584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:15.942628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:15.942645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:15.942654] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.541 [2024-12-03 11:56:15.942663] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.541 [2024-12-03 11:56:15.953085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.541 qpair failed and we were unable to recover it. 
00:28:45.541 [2024-12-03 11:56:15.962740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:15.962774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:15.962791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:15.962800] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.541 [2024-12-03 11:56:15.962809] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.541 [2024-12-03 11:56:15.972982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.541 qpair failed and we were unable to recover it. 00:28:45.541 [2024-12-03 11:56:15.982701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:15.982744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:15.982761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:15.982771] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.541 [2024-12-03 11:56:15.982779] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.541 [2024-12-03 11:56:15.993144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.541 qpair failed and we were unable to recover it. 00:28:45.541 [2024-12-03 11:56:16.002864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:16.002904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:16.002922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:16.002932] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.541 [2024-12-03 11:56:16.002941] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.541 [2024-12-03 11:56:16.013241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.541 qpair failed and we were unable to recover it. 
00:28:45.541 [2024-12-03 11:56:16.022863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:16.022906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:16.022923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:16.022932] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.541 [2024-12-03 11:56:16.022941] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.541 [2024-12-03 11:56:16.033286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.541 qpair failed and we were unable to recover it. 00:28:45.541 [2024-12-03 11:56:16.042941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:16.042981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:16.042998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:16.043007] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.541 [2024-12-03 11:56:16.043016] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.541 [2024-12-03 11:56:16.053204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.541 qpair failed and we were unable to recover it. 00:28:45.541 [2024-12-03 11:56:16.062950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.541 [2024-12-03 11:56:16.062991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.541 [2024-12-03 11:56:16.063007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.541 [2024-12-03 11:56:16.063020] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.542 [2024-12-03 11:56:16.063028] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.542 [2024-12-03 11:56:16.073381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.542 qpair failed and we were unable to recover it. 
00:28:45.542 [2024-12-03 11:56:16.083000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.542 [2024-12-03 11:56:16.083037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.542 [2024-12-03 11:56:16.083054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.542 [2024-12-03 11:56:16.083063] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.542 [2024-12-03 11:56:16.083072] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.542 [2024-12-03 11:56:16.093352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.542 qpair failed and we were unable to recover it. 00:28:45.542 [2024-12-03 11:56:16.103075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.542 [2024-12-03 11:56:16.103123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.542 [2024-12-03 11:56:16.103139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.542 [2024-12-03 11:56:16.103149] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.542 [2024-12-03 11:56:16.103158] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.542 [2024-12-03 11:56:16.113483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.542 qpair failed and we were unable to recover it. 00:28:45.542 [2024-12-03 11:56:16.123132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.542 [2024-12-03 11:56:16.123170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.542 [2024-12-03 11:56:16.123187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.542 [2024-12-03 11:56:16.123196] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.542 [2024-12-03 11:56:16.123204] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.542 [2024-12-03 11:56:16.133418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.542 qpair failed and we were unable to recover it. 
00:28:45.542 [2024-12-03 11:56:16.143176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.542 [2024-12-03 11:56:16.143215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.542 [2024-12-03 11:56:16.143232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.542 [2024-12-03 11:56:16.143241] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.542 [2024-12-03 11:56:16.143250] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.800 [2024-12-03 11:56:16.153743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.800 qpair failed and we were unable to recover it. 00:28:45.800 [2024-12-03 11:56:16.163222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.800 [2024-12-03 11:56:16.163261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.800 [2024-12-03 11:56:16.163277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.800 [2024-12-03 11:56:16.163287] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.800 [2024-12-03 11:56:16.163295] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.800 [2024-12-03 11:56:16.173712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.800 qpair failed and we were unable to recover it. 00:28:45.800 [2024-12-03 11:56:16.183202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.800 [2024-12-03 11:56:16.183244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.800 [2024-12-03 11:56:16.183261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.800 [2024-12-03 11:56:16.183270] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.800 [2024-12-03 11:56:16.183279] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.800 [2024-12-03 11:56:16.193772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.800 qpair failed and we were unable to recover it. 
00:28:45.800 [2024-12-03 11:56:16.203332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.800 [2024-12-03 11:56:16.203374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.800 [2024-12-03 11:56:16.203391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.800 [2024-12-03 11:56:16.203401] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.800 [2024-12-03 11:56:16.203410] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.800 [2024-12-03 11:56:16.213575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.800 qpair failed and we were unable to recover it. 00:28:45.800 [2024-12-03 11:56:16.223398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.800 [2024-12-03 11:56:16.223437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.800 [2024-12-03 11:56:16.223453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.800 [2024-12-03 11:56:16.223463] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.800 [2024-12-03 11:56:16.223472] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.800 [2024-12-03 11:56:16.233937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.800 qpair failed and we were unable to recover it. 00:28:45.800 [2024-12-03 11:56:16.243462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.800 [2024-12-03 11:56:16.243504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.800 [2024-12-03 11:56:16.243524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.800 [2024-12-03 11:56:16.243533] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.800 [2024-12-03 11:56:16.243542] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.800 [2024-12-03 11:56:16.253799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.800 qpair failed and we were unable to recover it. 
00:28:45.800 [2024-12-03 11:56:16.263496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.800 [2024-12-03 11:56:16.263533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.800 [2024-12-03 11:56:16.263549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.801 [2024-12-03 11:56:16.263558] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.801 [2024-12-03 11:56:16.263567] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.801 [2024-12-03 11:56:16.273967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.801 qpair failed and we were unable to recover it. 00:28:45.801 [2024-12-03 11:56:16.283612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.801 [2024-12-03 11:56:16.283652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.801 [2024-12-03 11:56:16.283668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.801 [2024-12-03 11:56:16.283678] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.801 [2024-12-03 11:56:16.283686] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.801 [2024-12-03 11:56:16.294170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.801 qpair failed and we were unable to recover it. 00:28:45.801 [2024-12-03 11:56:16.303670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.801 [2024-12-03 11:56:16.303708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.801 [2024-12-03 11:56:16.303724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.801 [2024-12-03 11:56:16.303733] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.801 [2024-12-03 11:56:16.303742] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.801 [2024-12-03 11:56:16.314051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.801 qpair failed and we were unable to recover it. 
00:28:45.801 [2024-12-03 11:56:16.323686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.801 [2024-12-03 11:56:16.323726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.801 [2024-12-03 11:56:16.323742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.801 [2024-12-03 11:56:16.323751] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.801 [2024-12-03 11:56:16.323763] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.801 [2024-12-03 11:56:16.334018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.801 qpair failed and we were unable to recover it. 00:28:45.801 [2024-12-03 11:56:16.343670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.801 [2024-12-03 11:56:16.343714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.801 [2024-12-03 11:56:16.343730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.801 [2024-12-03 11:56:16.343740] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.801 [2024-12-03 11:56:16.343750] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.801 [2024-12-03 11:56:16.354151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.801 qpair failed and we were unable to recover it. 00:28:45.801 [2024-12-03 11:56:16.363852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.801 [2024-12-03 11:56:16.363894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.801 [2024-12-03 11:56:16.363910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.801 [2024-12-03 11:56:16.363920] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.801 [2024-12-03 11:56:16.363930] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.801 [2024-12-03 11:56:16.374335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.801 qpair failed and we were unable to recover it. 
00:28:45.801 [2024-12-03 11:56:16.383868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.801 [2024-12-03 11:56:16.383910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.801 [2024-12-03 11:56:16.383926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.801 [2024-12-03 11:56:16.383935] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.801 [2024-12-03 11:56:16.383944] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:45.801 [2024-12-03 11:56:16.394347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.801 qpair failed and we were unable to recover it. 00:28:45.801 [2024-12-03 11:56:16.403880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.801 [2024-12-03 11:56:16.403920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.801 [2024-12-03 11:56:16.403937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.801 [2024-12-03 11:56:16.403947] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.801 [2024-12-03 11:56:16.403956] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.059 [2024-12-03 11:56:16.414357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.059 qpair failed and we were unable to recover it. 00:28:46.059 [2024-12-03 11:56:16.423981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.059 [2024-12-03 11:56:16.424024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.059 [2024-12-03 11:56:16.424041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.059 [2024-12-03 11:56:16.424050] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.059 [2024-12-03 11:56:16.424058] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.434908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 
00:28:46.060 [2024-12-03 11:56:16.443942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.443976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.443992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.444001] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.444010] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.454502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 00:28:46.060 [2024-12-03 11:56:16.464178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.464220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.464236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.464245] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.464254] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.474762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 00:28:46.060 [2024-12-03 11:56:16.484211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.484254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.484270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.484279] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.484287] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.494506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 
00:28:46.060 [2024-12-03 11:56:16.504242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.504287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.504303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.504318] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.504327] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.514660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 00:28:46.060 [2024-12-03 11:56:16.524351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.524386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.524403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.524412] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.524421] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.534809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 00:28:46.060 [2024-12-03 11:56:16.544340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.544382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.544398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.544407] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.544416] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.555021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 
00:28:46.060 [2024-12-03 11:56:16.564430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.564474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.564493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.564505] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.564516] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.574876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 00:28:46.060 [2024-12-03 11:56:16.584523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.584565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.584581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.584590] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.584599] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.594920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 00:28:46.060 [2024-12-03 11:56:16.604634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.604671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.604688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.604697] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.604706] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.614875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 
00:28:46.060 [2024-12-03 11:56:16.624564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.624603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.624620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.624629] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.624638] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.634939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 00:28:46.060 [2024-12-03 11:56:16.644707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.644752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.644769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.644779] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.644788] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.060 [2024-12-03 11:56:16.655168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.060 qpair failed and we were unable to recover it. 00:28:46.060 [2024-12-03 11:56:16.664673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.060 [2024-12-03 11:56:16.664715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.060 [2024-12-03 11:56:16.664731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.060 [2024-12-03 11:56:16.664740] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.060 [2024-12-03 11:56:16.664749] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.319 [2024-12-03 11:56:16.675217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.319 qpair failed and we were unable to recover it. 
00:28:46.319 [2024-12-03 11:56:16.684653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.319 [2024-12-03 11:56:16.684696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.319 [2024-12-03 11:56:16.684716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.319 [2024-12-03 11:56:16.684725] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.319 [2024-12-03 11:56:16.684734] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.319 [2024-12-03 11:56:16.695269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.319 qpair failed and we were unable to recover it. 00:28:46.319 [2024-12-03 11:56:16.704861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.319 [2024-12-03 11:56:16.704901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.319 [2024-12-03 11:56:16.704918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.319 [2024-12-03 11:56:16.704927] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.319 [2024-12-03 11:56:16.704936] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.319 [2024-12-03 11:56:16.715402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.319 qpair failed and we were unable to recover it. 00:28:46.319 [2024-12-03 11:56:16.724862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.319 [2024-12-03 11:56:16.724901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.319 [2024-12-03 11:56:16.724918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.319 [2024-12-03 11:56:16.724927] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.319 [2024-12-03 11:56:16.724935] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.319 [2024-12-03 11:56:16.735377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.319 qpair failed and we were unable to recover it. 
00:28:46.319 [2024-12-03 11:56:16.744897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.319 [2024-12-03 11:56:16.744940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.319 [2024-12-03 11:56:16.744956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.319 [2024-12-03 11:56:16.744966] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.319 [2024-12-03 11:56:16.744974] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.319 [2024-12-03 11:56:16.755573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.319 qpair failed and we were unable to recover it. 00:28:46.319 [2024-12-03 11:56:16.764989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.319 [2024-12-03 11:56:16.765036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.319 [2024-12-03 11:56:16.765052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.319 [2024-12-03 11:56:16.765062] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.319 [2024-12-03 11:56:16.765071] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.319 [2024-12-03 11:56:16.775410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.319 qpair failed and we were unable to recover it. 00:28:46.319 [2024-12-03 11:56:16.784975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.319 [2024-12-03 11:56:16.785015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.319 [2024-12-03 11:56:16.785031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.320 [2024-12-03 11:56:16.785040] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.320 [2024-12-03 11:56:16.785049] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.320 [2024-12-03 11:56:16.795429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.320 qpair failed and we were unable to recover it. 
00:28:46.320 [2024-12-03 11:56:16.805156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.320 [2024-12-03 11:56:16.805199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.320 [2024-12-03 11:56:16.805215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.320 [2024-12-03 11:56:16.805225] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.320 [2024-12-03 11:56:16.805234] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.320 [2024-12-03 11:56:16.815678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.320 qpair failed and we were unable to recover it. 00:28:46.320 [2024-12-03 11:56:16.825240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.320 [2024-12-03 11:56:16.825278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.320 [2024-12-03 11:56:16.825294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.320 [2024-12-03 11:56:16.825303] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.320 [2024-12-03 11:56:16.825312] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.320 [2024-12-03 11:56:16.835844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.320 qpair failed and we were unable to recover it. 00:28:46.320 [2024-12-03 11:56:16.845287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.320 [2024-12-03 11:56:16.845331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.320 [2024-12-03 11:56:16.845348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.320 [2024-12-03 11:56:16.845357] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.320 [2024-12-03 11:56:16.845366] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.320 [2024-12-03 11:56:16.855576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.320 qpair failed and we were unable to recover it. 
00:28:46.320 [2024-12-03 11:56:16.865251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.320 [2024-12-03 11:56:16.865298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.320 [2024-12-03 11:56:16.865314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.320 [2024-12-03 11:56:16.865324] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.320 [2024-12-03 11:56:16.865332] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.320 [2024-12-03 11:56:16.875861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.320 qpair failed and we were unable to recover it. 00:28:46.320 [2024-12-03 11:56:16.885561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.320 [2024-12-03 11:56:16.885605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.320 [2024-12-03 11:56:16.885622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.320 [2024-12-03 11:56:16.885634] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.320 [2024-12-03 11:56:16.885645] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.320 [2024-12-03 11:56:16.895872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.320 qpair failed and we were unable to recover it. 00:28:46.320 [2024-12-03 11:56:16.905582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.320 [2024-12-03 11:56:16.905617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.320 [2024-12-03 11:56:16.905634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.320 [2024-12-03 11:56:16.905644] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.320 [2024-12-03 11:56:16.905653] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.320 [2024-12-03 11:56:16.916152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.320 qpair failed and we were unable to recover it. 
00:28:46.320 [2024-12-03 11:56:16.925441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.320 [2024-12-03 11:56:16.925477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.320 [2024-12-03 11:56:16.925494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.320 [2024-12-03 11:56:16.925504] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.320 [2024-12-03 11:56:16.925513] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.579 [2024-12-03 11:56:16.936003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.579 qpair failed and we were unable to recover it. 00:28:46.579 [2024-12-03 11:56:16.945678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.579 [2024-12-03 11:56:16.945720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.579 [2024-12-03 11:56:16.945736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.579 [2024-12-03 11:56:16.945749] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.579 [2024-12-03 11:56:16.945758] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.579 [2024-12-03 11:56:16.956076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.579 qpair failed and we were unable to recover it. 00:28:46.579 [2024-12-03 11:56:16.965577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.579 [2024-12-03 11:56:16.965620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.579 [2024-12-03 11:56:16.965636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.579 [2024-12-03 11:56:16.965646] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.579 [2024-12-03 11:56:16.965654] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.579 [2024-12-03 11:56:16.976115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.579 qpair failed and we were unable to recover it. 
00:28:46.579 [2024-12-03 11:56:16.985665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.579 [2024-12-03 11:56:16.985709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.579 [2024-12-03 11:56:16.985726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.579 [2024-12-03 11:56:16.985735] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.579 [2024-12-03 11:56:16.985744] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.579 [2024-12-03 11:56:16.995976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.579 qpair failed and we were unable to recover it. 00:28:46.579 [2024-12-03 11:56:17.005749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.579 [2024-12-03 11:56:17.005789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.579 [2024-12-03 11:56:17.005807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.579 [2024-12-03 11:56:17.005817] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.579 [2024-12-03 11:56:17.005826] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.016192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 00:28:46.580 [2024-12-03 11:56:17.025776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.025819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.025836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.025846] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.025854] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.036049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 
00:28:46.580 [2024-12-03 11:56:17.045796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.045844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.045862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.045872] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.045881] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.056403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 00:28:46.580 [2024-12-03 11:56:17.065888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.065929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.065946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.065956] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.065965] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.076609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 00:28:46.580 [2024-12-03 11:56:17.085963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.086002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.086019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.086029] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.086038] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.096349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 
00:28:46.580 [2024-12-03 11:56:17.105899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.105941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.105958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.105968] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.105976] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.116370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 00:28:46.580 [2024-12-03 11:56:17.126068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.126107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.126132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.126142] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.126150] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.136469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 00:28:46.580 [2024-12-03 11:56:17.146119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.146163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.146180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.146189] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.146198] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.156612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 
00:28:46.580 [2024-12-03 11:56:17.166335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.166371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.166388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.166398] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.166407] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.580 [2024-12-03 11:56:17.176660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.580 qpair failed and we were unable to recover it. 00:28:46.580 [2024-12-03 11:56:17.186273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.580 [2024-12-03 11:56:17.186317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.580 [2024-12-03 11:56:17.186334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.580 [2024-12-03 11:56:17.186344] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.580 [2024-12-03 11:56:17.186353] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.839 [2024-12-03 11:56:17.196718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.839 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.206308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.206348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.206365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.206375] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.206384] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.216757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 
00:28:46.840 [2024-12-03 11:56:17.226326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.226366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.226383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.226393] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.226402] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.236671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.246406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.246449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.246465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.246475] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.246484] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.256878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.266440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.266480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.266496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.266506] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.266515] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.277129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 
00:28:46.840 [2024-12-03 11:56:17.286570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.286617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.286634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.286644] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.286653] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.296938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.306589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.306634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.306650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.306660] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.306668] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.317060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.326744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.326784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.326802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.326811] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.326820] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.337145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 
00:28:46.840 [2024-12-03 11:56:17.346678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.346718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.346736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.346746] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.346754] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.357197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.366828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.366868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.366884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.366894] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.366902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.377213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.386891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.386928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.386945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.386955] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.386966] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.397301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 
00:28:46.840 [2024-12-03 11:56:17.406901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.406942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.406960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.406970] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.406979] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.417294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.426979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.427023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.427041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.427050] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.427059] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:46.840 [2024-12-03 11:56:17.437620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.840 qpair failed and we were unable to recover it. 00:28:46.840 [2024-12-03 11:56:17.447090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.840 [2024-12-03 11:56:17.447133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.840 [2024-12-03 11:56:17.447151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.840 [2024-12-03 11:56:17.447160] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.840 [2024-12-03 11:56:17.447169] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.100 [2024-12-03 11:56:17.457442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.100 qpair failed and we were unable to recover it. 
00:28:47.100 [2024-12-03 11:56:17.467181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.100 [2024-12-03 11:56:17.467221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.100 [2024-12-03 11:56:17.467237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.100 [2024-12-03 11:56:17.467247] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.100 [2024-12-03 11:56:17.467255] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.100 [2024-12-03 11:56:17.477512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.100 qpair failed and we were unable to recover it. 00:28:47.100 [2024-12-03 11:56:17.487262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.100 [2024-12-03 11:56:17.487308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.100 [2024-12-03 11:56:17.487324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.100 [2024-12-03 11:56:17.487334] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.100 [2024-12-03 11:56:17.487342] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.100 [2024-12-03 11:56:17.497516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.100 qpair failed and we were unable to recover it. 00:28:47.100 [2024-12-03 11:56:17.507279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.100 [2024-12-03 11:56:17.507323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.100 [2024-12-03 11:56:17.507339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.100 [2024-12-03 11:56:17.507348] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.100 [2024-12-03 11:56:17.507357] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.100 [2024-12-03 11:56:17.517477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.100 qpair failed and we were unable to recover it. 
00:28:47.100 [2024-12-03 11:56:17.527409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.100 [2024-12-03 11:56:17.527456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.100 [2024-12-03 11:56:17.527473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.100 [2024-12-03 11:56:17.527483] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.100 [2024-12-03 11:56:17.527492] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.100 [2024-12-03 11:56:17.537548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.100 qpair failed and we were unable to recover it. 00:28:47.100 [2024-12-03 11:56:17.547407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.100 [2024-12-03 11:56:17.547444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.100 [2024-12-03 11:56:17.547463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.100 [2024-12-03 11:56:17.547472] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.100 [2024-12-03 11:56:17.547481] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.100 [2024-12-03 11:56:17.557739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.100 qpair failed and we were unable to recover it. 00:28:47.100 [2024-12-03 11:56:17.567512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.100 [2024-12-03 11:56:17.567554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.100 [2024-12-03 11:56:17.567574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.100 [2024-12-03 11:56:17.567584] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.100 [2024-12-03 11:56:17.567593] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.100 [2024-12-03 11:56:17.577902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.100 qpair failed and we were unable to recover it. 
00:28:47.100 [2024-12-03 11:56:17.587512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.100 [2024-12-03 11:56:17.587553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.100 [2024-12-03 11:56:17.587569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.101 [2024-12-03 11:56:17.587579] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.101 [2024-12-03 11:56:17.587587] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.101 [2024-12-03 11:56:17.597805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.101 qpair failed and we were unable to recover it. 00:28:47.101 [2024-12-03 11:56:17.607549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.101 [2024-12-03 11:56:17.607591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.101 [2024-12-03 11:56:17.607609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.101 [2024-12-03 11:56:17.607618] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.101 [2024-12-03 11:56:17.607627] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.101 [2024-12-03 11:56:17.617832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.101 qpair failed and we were unable to recover it. 00:28:47.101 [2024-12-03 11:56:17.627682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.101 [2024-12-03 11:56:17.627729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.101 [2024-12-03 11:56:17.627746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.101 [2024-12-03 11:56:17.627756] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.101 [2024-12-03 11:56:17.627765] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.101 [2024-12-03 11:56:17.637947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.101 qpair failed and we were unable to recover it. 
00:28:47.101 [2024-12-03 11:56:17.647646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.101 [2024-12-03 11:56:17.647688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.101 [2024-12-03 11:56:17.647705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.101 [2024-12-03 11:56:17.647714] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.101 [2024-12-03 11:56:17.647724] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.101 [2024-12-03 11:56:17.658040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.101 qpair failed and we were unable to recover it. 00:28:47.101 [2024-12-03 11:56:17.667759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.101 [2024-12-03 11:56:17.667801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.101 [2024-12-03 11:56:17.667818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.101 [2024-12-03 11:56:17.667828] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.101 [2024-12-03 11:56:17.667837] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.101 [2024-12-03 11:56:17.678207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.101 qpair failed and we were unable to recover it. 00:28:47.101 [2024-12-03 11:56:17.687766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.101 [2024-12-03 11:56:17.687801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.101 [2024-12-03 11:56:17.687818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.101 [2024-12-03 11:56:17.687828] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.101 [2024-12-03 11:56:17.687836] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.101 [2024-12-03 11:56:17.698133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.101 qpair failed and we were unable to recover it. 
00:28:47.101 [2024-12-03 11:56:17.707883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.101 [2024-12-03 11:56:17.707928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.101 [2024-12-03 11:56:17.707945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.101 [2024-12-03 11:56:17.707954] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.101 [2024-12-03 11:56:17.707963] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.718601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 00:28:47.361 [2024-12-03 11:56:17.727893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.727933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.727949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.727959] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.727968] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.738280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 00:28:47.361 [2024-12-03 11:56:17.747919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.747963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.747982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.747992] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.748000] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.758427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 
00:28:47.361 [2024-12-03 11:56:17.768026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.768068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.768084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.768093] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.768102] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.778324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 00:28:47.361 [2024-12-03 11:56:17.788102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.788154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.788171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.788180] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.788188] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.798319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 00:28:47.361 [2024-12-03 11:56:17.808161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.808199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.808216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.808226] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.808235] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.818538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 
00:28:47.361 [2024-12-03 11:56:17.828253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.828295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.828312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.828322] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.828334] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.838613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 00:28:47.361 [2024-12-03 11:56:17.848324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.848367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.848384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.848393] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.848402] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.858637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 00:28:47.361 [2024-12-03 11:56:17.868417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.868456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.868472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.868482] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.868491] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.878566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 
00:28:47.361 [2024-12-03 11:56:17.888260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.888304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.888321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.888330] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.888339] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.898725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 00:28:47.361 [2024-12-03 11:56:17.908344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.908385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.908401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.908411] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.908419] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.918955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 00:28:47.361 [2024-12-03 11:56:17.928355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.361 [2024-12-03 11:56:17.928398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.361 [2024-12-03 11:56:17.928415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.361 [2024-12-03 11:56:17.928425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.361 [2024-12-03 11:56:17.928434] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.361 [2024-12-03 11:56:17.938916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.361 qpair failed and we were unable to recover it. 
00:28:47.362 [2024-12-03 11:56:17.948489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.362 [2024-12-03 11:56:17.948530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.362 [2024-12-03 11:56:17.948546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.362 [2024-12-03 11:56:17.948555] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.362 [2024-12-03 11:56:17.948565] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.362 [2024-12-03 11:56:17.958989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.362 qpair failed and we were unable to recover it. 00:28:47.362 [2024-12-03 11:56:17.968622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.362 [2024-12-03 11:56:17.968656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.362 [2024-12-03 11:56:17.968673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.362 [2024-12-03 11:56:17.968682] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.362 [2024-12-03 11:56:17.968691] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:17.979165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-12-03 11:56:17.988575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:17.988619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:17.988636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:17.988646] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:17.988655] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:17.999074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 
00:28:47.622 [2024-12-03 11:56:18.008647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:18.008690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:18.008708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:18.008723] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:18.008733] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:18.019070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-12-03 11:56:18.028762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:18.028805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:18.028822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:18.028832] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:18.028841] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:18.039177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-12-03 11:56:18.048788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:18.048828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:18.048845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:18.048854] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:18.048863] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:18.059380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 
00:28:47.622 [2024-12-03 11:56:18.068855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:18.068898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:18.068915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:18.068925] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:18.068933] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:18.079406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-12-03 11:56:18.089043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:18.089081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:18.089098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:18.089108] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:18.089121] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:18.099261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-12-03 11:56:18.108961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:18.109009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:18.109027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:18.109038] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:18.109047] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:18.119498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 
00:28:47.622 [2024-12-03 11:56:18.129028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:18.129069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:18.129086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:18.129096] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:18.129105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.622 [2024-12-03 11:56:18.139320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-12-03 11:56:18.149160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.622 [2024-12-03 11:56:18.149201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.622 [2024-12-03 11:56:18.149219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.622 [2024-12-03 11:56:18.149229] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.622 [2024-12-03 11:56:18.149238] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.623 [2024-12-03 11:56:18.159786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-12-03 11:56:18.169247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.623 [2024-12-03 11:56:18.169294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.623 [2024-12-03 11:56:18.169311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.623 [2024-12-03 11:56:18.169320] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.623 [2024-12-03 11:56:18.169329] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.623 [2024-12-03 11:56:18.179660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.623 qpair failed and we were unable to recover it. 
00:28:47.623 [2024-12-03 11:56:18.189243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.623 [2024-12-03 11:56:18.189280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.623 [2024-12-03 11:56:18.189300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.623 [2024-12-03 11:56:18.189309] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.623 [2024-12-03 11:56:18.189317] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.623 [2024-12-03 11:56:18.199749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-12-03 11:56:18.209259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.623 [2024-12-03 11:56:18.209305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.623 [2024-12-03 11:56:18.209323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.623 [2024-12-03 11:56:18.209333] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.623 [2024-12-03 11:56:18.209342] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.623 [2024-12-03 11:56:18.219619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-12-03 11:56:18.229368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.623 [2024-12-03 11:56:18.229408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.623 [2024-12-03 11:56:18.229426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.623 [2024-12-03 11:56:18.229435] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.623 [2024-12-03 11:56:18.229444] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.882 [2024-12-03 11:56:18.239748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.882 qpair failed and we were unable to recover it. 
00:28:47.882 [2024-12-03 11:56:18.249456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.882 [2024-12-03 11:56:18.249499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.882 [2024-12-03 11:56:18.249515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.882 [2024-12-03 11:56:18.249525] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.882 [2024-12-03 11:56:18.249534] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.882 [2024-12-03 11:56:18.259810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.882 qpair failed and we were unable to recover it. 00:28:47.882 [2024-12-03 11:56:18.269572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.882 [2024-12-03 11:56:18.269618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.882 [2024-12-03 11:56:18.269634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.882 [2024-12-03 11:56:18.269644] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.882 [2024-12-03 11:56:18.269656] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.882 [2024-12-03 11:56:18.279953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.882 qpair failed and we were unable to recover it. 00:28:47.882 [2024-12-03 11:56:18.289602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.882 [2024-12-03 11:56:18.289640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.882 [2024-12-03 11:56:18.289656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.882 [2024-12-03 11:56:18.289666] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.882 [2024-12-03 11:56:18.289675] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.882 [2024-12-03 11:56:18.300061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.882 qpair failed and we were unable to recover it. 
00:28:47.882 [2024-12-03 11:56:18.309609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.882 [2024-12-03 11:56:18.309650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.882 [2024-12-03 11:56:18.309667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.882 [2024-12-03 11:56:18.309676] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.882 [2024-12-03 11:56:18.309685] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.882 [2024-12-03 11:56:18.320172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.882 qpair failed and we were unable to recover it. 00:28:47.882 [2024-12-03 11:56:18.329758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.882 [2024-12-03 11:56:18.329799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.882 [2024-12-03 11:56:18.329816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.882 [2024-12-03 11:56:18.329825] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.882 [2024-12-03 11:56:18.329834] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.882 [2024-12-03 11:56:18.340003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.882 qpair failed and we were unable to recover it. 00:28:47.882 [2024-12-03 11:56:18.349726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.882 [2024-12-03 11:56:18.349769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.882 [2024-12-03 11:56:18.349785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.882 [2024-12-03 11:56:18.349795] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.882 [2024-12-03 11:56:18.349804] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.882 [2024-12-03 11:56:18.360547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.882 qpair failed and we were unable to recover it. 
00:28:47.882 [2024-12-03 11:56:18.369800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.882 [2024-12-03 11:56:18.369846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.882 [2024-12-03 11:56:18.369862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.882 [2024-12-03 11:56:18.369871] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.882 [2024-12-03 11:56:18.369880] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.882 [2024-12-03 11:56:18.380227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.882 qpair failed and we were unable to recover it. 00:28:47.882 [2024-12-03 11:56:18.389786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.882 [2024-12-03 11:56:18.389826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.883 [2024-12-03 11:56:18.389842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.883 [2024-12-03 11:56:18.389851] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.883 [2024-12-03 11:56:18.389860] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.883 [2024-12-03 11:56:18.400330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.883 qpair failed and we were unable to recover it. 00:28:47.883 [2024-12-03 11:56:18.409897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.883 [2024-12-03 11:56:18.409944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.883 [2024-12-03 11:56:18.409960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.883 [2024-12-03 11:56:18.409969] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.883 [2024-12-03 11:56:18.409978] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.883 [2024-12-03 11:56:18.420368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.883 qpair failed and we were unable to recover it. 
00:28:47.883 [2024-12-03 11:56:18.429950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.883 [2024-12-03 11:56:18.429988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.883 [2024-12-03 11:56:18.430005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.883 [2024-12-03 11:56:18.430015] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.883 [2024-12-03 11:56:18.430024] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.883 [2024-12-03 11:56:18.440549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.883 qpair failed and we were unable to recover it. 00:28:47.883 [2024-12-03 11:56:18.450008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.883 [2024-12-03 11:56:18.450046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.883 [2024-12-03 11:56:18.450061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.883 [2024-12-03 11:56:18.450074] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.883 [2024-12-03 11:56:18.450083] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.883 [2024-12-03 11:56:18.460566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.883 qpair failed and we were unable to recover it. 00:28:47.883 [2024-12-03 11:56:18.470151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.883 [2024-12-03 11:56:18.470194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.883 [2024-12-03 11:56:18.470211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.883 [2024-12-03 11:56:18.470221] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.883 [2024-12-03 11:56:18.470230] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:47.883 [2024-12-03 11:56:18.480511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:47.883 qpair failed and we were unable to recover it. 
00:28:47.883 [2024-12-03 11:56:18.490264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.883 [2024-12-03 11:56:18.490303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.883 [2024-12-03 11:56:18.490319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.883 [2024-12-03 11:56:18.490328] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.883 [2024-12-03 11:56:18.490337] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.179 [2024-12-03 11:56:18.500574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.179 qpair failed and we were unable to recover it. 00:28:48.179 [2024-12-03 11:56:18.510275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.179 [2024-12-03 11:56:18.510317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.179 [2024-12-03 11:56:18.510334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.179 [2024-12-03 11:56:18.510343] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.179 [2024-12-03 11:56:18.510352] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.179 [2024-12-03 11:56:18.520758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.179 qpair failed and we were unable to recover it. 00:28:48.179 [2024-12-03 11:56:18.530172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.179 [2024-12-03 11:56:18.530213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.179 [2024-12-03 11:56:18.530231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.179 [2024-12-03 11:56:18.530240] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.179 [2024-12-03 11:56:18.530249] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.179 [2024-12-03 11:56:18.540845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.179 qpair failed and we were unable to recover it. 
00:28:48.179 [2024-12-03 11:56:18.550383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.179 [2024-12-03 11:56:18.550430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.179 [2024-12-03 11:56:18.550449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.179 [2024-12-03 11:56:18.550459] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.179 [2024-12-03 11:56:18.550468] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.179 [2024-12-03 11:56:18.560897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.179 qpair failed and we were unable to recover it. 00:28:48.179 [2024-12-03 11:56:18.570423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.179 [2024-12-03 11:56:18.570467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.179 [2024-12-03 11:56:18.570484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.179 [2024-12-03 11:56:18.570494] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.180 [2024-12-03 11:56:18.570502] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.180 [2024-12-03 11:56:18.580799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.180 qpair failed and we were unable to recover it. 00:28:48.180 [2024-12-03 11:56:18.590362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.180 [2024-12-03 11:56:18.590403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.180 [2024-12-03 11:56:18.590419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.180 [2024-12-03 11:56:18.590428] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.180 [2024-12-03 11:56:18.590437] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.180 [2024-12-03 11:56:18.600916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.180 qpair failed and we were unable to recover it. 
00:28:48.180 [2024-12-03 11:56:18.610494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.180 [2024-12-03 11:56:18.610537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.180 [2024-12-03 11:56:18.610553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.180 [2024-12-03 11:56:18.610563] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.180 [2024-12-03 11:56:18.610572] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.180 [2024-12-03 11:56:18.620929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.180 qpair failed and we were unable to recover it. 00:28:48.180 [2024-12-03 11:56:18.630632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.180 [2024-12-03 11:56:18.630673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.180 [2024-12-03 11:56:18.630694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.180 [2024-12-03 11:56:18.630704] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.180 [2024-12-03 11:56:18.630712] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.180 [2024-12-03 11:56:18.641063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.180 qpair failed and we were unable to recover it. 00:28:48.180 [2024-12-03 11:56:18.650659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.180 [2024-12-03 11:56:18.650705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.180 [2024-12-03 11:56:18.650721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.180 [2024-12-03 11:56:18.650731] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.180 [2024-12-03 11:56:18.650739] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.180 [2024-12-03 11:56:18.661042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.180 qpair failed and we were unable to recover it. 
00:28:48.180 [2024-12-03 11:56:18.670747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.180 [2024-12-03 11:56:18.670788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.180 [2024-12-03 11:56:18.670804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.180 [2024-12-03 11:56:18.670813] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.180 [2024-12-03 11:56:18.670822] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.180 [2024-12-03 11:56:18.681241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.180 qpair failed and we were unable to recover it. 00:28:48.180 [2024-12-03 11:56:18.690773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.180 [2024-12-03 11:56:18.690815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.180 [2024-12-03 11:56:18.690832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.180 [2024-12-03 11:56:18.690841] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.180 [2024-12-03 11:56:18.690850] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.180 [2024-12-03 11:56:18.701088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.180 qpair failed and we were unable to recover it. 00:28:48.180 [2024-12-03 11:56:18.710773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.180 [2024-12-03 11:56:18.710813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.180 [2024-12-03 11:56:18.710830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.180 [2024-12-03 11:56:18.710839] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.180 [2024-12-03 11:56:18.710848] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.180 [2024-12-03 11:56:18.721354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.180 qpair failed and we were unable to recover it. 
[... the same error sequence repeats for every subsequent I/O qpair connect attempt from 2024-12-03 11:56:18.730 through 11:56:20.045: ctrlr.c Unknown controller ID 0x1; nvme_fabric.c Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; Connect command completed with error: sct 1, sc 130; nvme_rdma.c Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect rqpair=0x2000003d3080; nvme_qpair.c CQ transport error -6 (No such device or address) on qpair id 3; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:49.523 [2024-12-03 11:56:20.054824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.523 [2024-12-03 11:56:20.054873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.523 [2024-12-03 11:56:20.054890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.523 [2024-12-03 11:56:20.054900] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.523 [2024-12-03 11:56:20.054910] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.523 [2024-12-03 11:56:20.065192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.523 qpair failed and we were unable to recover it. 00:28:49.523 [2024-12-03 11:56:20.074799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.523 [2024-12-03 11:56:20.074839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.523 [2024-12-03 11:56:20.074856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.523 [2024-12-03 11:56:20.074866] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.523 [2024-12-03 11:56:20.074875] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.523 [2024-12-03 11:56:20.085170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.523 qpair failed and we were unable to recover it. 00:28:49.523 [2024-12-03 11:56:20.094862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.523 [2024-12-03 11:56:20.094910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.523 [2024-12-03 11:56:20.094927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.523 [2024-12-03 11:56:20.094936] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.523 [2024-12-03 11:56:20.094945] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.523 [2024-12-03 11:56:20.105282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.523 qpair failed and we were unable to recover it. 
00:28:49.523 [2024-12-03 11:56:20.114978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.523 [2024-12-03 11:56:20.115015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.523 [2024-12-03 11:56:20.115032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.523 [2024-12-03 11:56:20.115041] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.523 [2024-12-03 11:56:20.115050] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.523 [2024-12-03 11:56:20.125242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.523 qpair failed and we were unable to recover it. 00:28:49.782 [2024-12-03 11:56:20.135011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.782 [2024-12-03 11:56:20.135049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.782 [2024-12-03 11:56:20.135071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.782 [2024-12-03 11:56:20.135081] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.782 [2024-12-03 11:56:20.135091] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.782 [2024-12-03 11:56:20.145271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.782 qpair failed and we were unable to recover it. 00:28:49.782 [2024-12-03 11:56:20.155220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.782 [2024-12-03 11:56:20.155265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.782 [2024-12-03 11:56:20.155283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.782 [2024-12-03 11:56:20.155292] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.782 [2024-12-03 11:56:20.155301] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.782 [2024-12-03 11:56:20.165659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.782 qpair failed and we were unable to recover it. 
00:28:49.782 [2024-12-03 11:56:20.175145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.782 [2024-12-03 11:56:20.175194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.782 [2024-12-03 11:56:20.175211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.782 [2024-12-03 11:56:20.175221] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.782 [2024-12-03 11:56:20.175230] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.782 [2024-12-03 11:56:20.185489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.782 qpair failed and we were unable to recover it. 00:28:49.782 [2024-12-03 11:56:20.195226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.782 [2024-12-03 11:56:20.195270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.782 [2024-12-03 11:56:20.195287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.782 [2024-12-03 11:56:20.195297] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.782 [2024-12-03 11:56:20.195307] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.782 [2024-12-03 11:56:20.205637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.782 qpair failed and we were unable to recover it. 00:28:49.782 [2024-12-03 11:56:20.215279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.782 [2024-12-03 11:56:20.215320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.782 [2024-12-03 11:56:20.215337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.782 [2024-12-03 11:56:20.215347] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.782 [2024-12-03 11:56:20.215359] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.782 [2024-12-03 11:56:20.225673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.782 qpair failed and we were unable to recover it. 
00:28:49.782 [2024-12-03 11:56:20.235319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.782 [2024-12-03 11:56:20.235362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.782 [2024-12-03 11:56:20.235379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.782 [2024-12-03 11:56:20.235388] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.782 [2024-12-03 11:56:20.235398] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.782 [2024-12-03 11:56:20.245707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.782 qpair failed and we were unable to recover it. 00:28:49.782 [2024-12-03 11:56:20.255453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.782 [2024-12-03 11:56:20.255501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.782 [2024-12-03 11:56:20.255518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.782 [2024-12-03 11:56:20.255528] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.783 [2024-12-03 11:56:20.255537] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.783 [2024-12-03 11:56:20.265709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.783 qpair failed and we were unable to recover it. 00:28:49.783 [2024-12-03 11:56:20.275414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.783 [2024-12-03 11:56:20.275459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.783 [2024-12-03 11:56:20.275475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.783 [2024-12-03 11:56:20.275485] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.783 [2024-12-03 11:56:20.275493] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.783 [2024-12-03 11:56:20.286328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.783 qpair failed and we were unable to recover it. 
00:28:49.783 [2024-12-03 11:56:20.295439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.783 [2024-12-03 11:56:20.295474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.783 [2024-12-03 11:56:20.295491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.783 [2024-12-03 11:56:20.295501] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.783 [2024-12-03 11:56:20.295510] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.783 [2024-12-03 11:56:20.305823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.783 qpair failed and we were unable to recover it. 00:28:49.783 [2024-12-03 11:56:20.315447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.783 [2024-12-03 11:56:20.315493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.783 [2024-12-03 11:56:20.315509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.783 [2024-12-03 11:56:20.315519] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.783 [2024-12-03 11:56:20.315528] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.783 [2024-12-03 11:56:20.325871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.783 qpair failed and we were unable to recover it. 00:28:49.783 [2024-12-03 11:56:20.335702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.783 [2024-12-03 11:56:20.335746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.783 [2024-12-03 11:56:20.335764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.783 [2024-12-03 11:56:20.335773] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.783 [2024-12-03 11:56:20.335782] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.783 [2024-12-03 11:56:20.345903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.783 qpair failed and we were unable to recover it. 
00:28:49.783 [2024-12-03 11:56:20.355836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.783 [2024-12-03 11:56:20.355875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.783 [2024-12-03 11:56:20.355892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.783 [2024-12-03 11:56:20.355901] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.783 [2024-12-03 11:56:20.355910] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.783 [2024-12-03 11:56:20.366082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.783 qpair failed and we were unable to recover it. 00:28:49.783 [2024-12-03 11:56:20.375797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.783 [2024-12-03 11:56:20.375841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.783 [2024-12-03 11:56:20.375858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.783 [2024-12-03 11:56:20.375867] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.783 [2024-12-03 11:56:20.375876] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.783 [2024-12-03 11:56:20.386169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.783 qpair failed and we were unable to recover it. 00:28:50.041 [2024-12-03 11:56:20.395830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.041 [2024-12-03 11:56:20.395875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.041 [2024-12-03 11:56:20.395891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.041 [2024-12-03 11:56:20.395905] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.041 [2024-12-03 11:56:20.395914] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.041 [2024-12-03 11:56:20.406408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.041 qpair failed and we were unable to recover it. 
00:28:50.041 [2024-12-03 11:56:20.415814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.041 [2024-12-03 11:56:20.415853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.041 [2024-12-03 11:56:20.415869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.041 [2024-12-03 11:56:20.415878] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.041 [2024-12-03 11:56:20.415887] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.041 [2024-12-03 11:56:20.426202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.041 qpair failed and we were unable to recover it. 00:28:50.041 [2024-12-03 11:56:20.435742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.041 [2024-12-03 11:56:20.435781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.041 [2024-12-03 11:56:20.435799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.041 [2024-12-03 11:56:20.435810] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.041 [2024-12-03 11:56:20.435821] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.041 [2024-12-03 11:56:20.446231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.041 qpair failed and we were unable to recover it. 00:28:50.041 [2024-12-03 11:56:20.455976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.041 [2024-12-03 11:56:20.456020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.041 [2024-12-03 11:56:20.456037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.041 [2024-12-03 11:56:20.456047] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.041 [2024-12-03 11:56:20.456056] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.041 [2024-12-03 11:56:20.466198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.041 qpair failed and we were unable to recover it. 
00:28:50.041 [2024-12-03 11:56:20.476184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.041 [2024-12-03 11:56:20.476223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.476240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.476249] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.476258] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.486333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 00:28:50.042 [2024-12-03 11:56:20.496005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.042 [2024-12-03 11:56:20.496045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.496062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.496072] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.496081] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.506399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 00:28:50.042 [2024-12-03 11:56:20.516085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.042 [2024-12-03 11:56:20.516135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.516153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.516163] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.516172] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.526537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 
00:28:50.042 [2024-12-03 11:56:20.536091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.042 [2024-12-03 11:56:20.536134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.536154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.536164] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.536173] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.546536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 00:28:50.042 [2024-12-03 11:56:20.556249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.042 [2024-12-03 11:56:20.556291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.556308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.556318] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.556327] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.566785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 00:28:50.042 [2024-12-03 11:56:20.576311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.042 [2024-12-03 11:56:20.576356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.576376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.576385] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.576394] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.586648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 
00:28:50.042 [2024-12-03 11:56:20.596368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.042 [2024-12-03 11:56:20.596408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.596425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.596435] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.596444] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.606843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 00:28:50.042 [2024-12-03 11:56:20.616481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.042 [2024-12-03 11:56:20.616522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.616539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.616548] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.616557] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.627011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 00:28:50.042 [2024-12-03 11:56:20.636495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.042 [2024-12-03 11:56:20.636538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.042 [2024-12-03 11:56:20.636555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.042 [2024-12-03 11:56:20.636565] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.042 [2024-12-03 11:56:20.636574] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.042 [2024-12-03 11:56:20.647002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.042 qpair failed and we were unable to recover it. 
00:28:50.300 [2024-12-03 11:56:20.656661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.300 [2024-12-03 11:56:20.656710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.300 [2024-12-03 11:56:20.656727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.300 [2024-12-03 11:56:20.656736] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.300 [2024-12-03 11:56:20.656745] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.300 [2024-12-03 11:56:20.666905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.300 qpair failed and we were unable to recover it. 00:28:50.300 [2024-12-03 11:56:20.676686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.300 [2024-12-03 11:56:20.676725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.300 [2024-12-03 11:56:20.676741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.300 [2024-12-03 11:56:20.676751] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.300 [2024-12-03 11:56:20.676760] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.300 [2024-12-03 11:56:20.687181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.300 qpair failed and we were unable to recover it. 00:28:50.300 [2024-12-03 11:56:20.696651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.300 [2024-12-03 11:56:20.696690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.300 [2024-12-03 11:56:20.696705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.300 [2024-12-03 11:56:20.696715] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.300 [2024-12-03 11:56:20.696724] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.300 [2024-12-03 11:56:20.707158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.300 qpair failed and we were unable to recover it. 
00:28:50.300 [2024-12-03 11:56:20.716764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.300 [2024-12-03 11:56:20.716806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.300 [2024-12-03 11:56:20.716822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.300 [2024-12-03 11:56:20.716832] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.300 [2024-12-03 11:56:20.716841] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.300 [2024-12-03 11:56:20.727296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.300 qpair failed and we were unable to recover it. 00:28:51.235 Write completed with error (sct=0, sc=8) 00:28:51.235 starting I/O failed 00:28:51.235 Read completed with error (sct=0, sc=8) 00:28:51.235 starting I/O failed 00:28:51.235 Read completed with error (sct=0, sc=8) 00:28:51.235 starting I/O failed 00:28:51.235 Read completed with error (sct=0, sc=8) 00:28:51.235 starting I/O failed 00:28:51.235 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Read completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write 
completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 Write completed with error (sct=0, sc=8) 00:28:51.236 starting I/O failed 00:28:51.236 [2024-12-03 11:56:21.732471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:51.236 [2024-12-03 11:56:21.739496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.236 [2024-12-03 11:56:21.739548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.236 [2024-12-03 11:56:21.739567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.236 [2024-12-03 11:56:21.739577] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.236 [2024-12-03 11:56:21.739587] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:51.236 [2024-12-03 11:56:21.750202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:51.236 qpair failed and we were unable to recover it. 00:28:51.236 [2024-12-03 11:56:21.759810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.236 [2024-12-03 11:56:21.759854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.236 [2024-12-03 11:56:21.759871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.236 [2024-12-03 11:56:21.759881] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.236 [2024-12-03 11:56:21.759890] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:51.236 [2024-12-03 11:56:21.770394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:51.236 qpair failed and we were unable to recover it. 00:28:51.236 [2024-12-03 11:56:21.779923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.236 [2024-12-03 11:56:21.779969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.236 [2024-12-03 11:56:21.779991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.236 [2024-12-03 11:56:21.780002] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.236 [2024-12-03 11:56:21.780011] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:51.236 [2024-12-03 11:56:21.790400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:51.236 qpair failed and we were unable to recover it. 
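The burst of "Read/Write completed with error (sct=0, sc=8)" lines just above is the other half of the same failure: once the qpair drops, every in-flight I/O completes with generic status type 0 and status code 8 (0x08, Command Aborted due to SQ Deletion), so these are queue-teardown aborts rather than media errors. When triaging a run like this it can help to count them against the queue depth; a small sketch (the build.log filename is hypothetical, substitute the saved console output):

    # count the in-flight commands that were aborted when the qpair dropped
    grep -c 'completed with error (sct=0, sc=8)' build.log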
00:28:51.236 [2024-12-03 11:56:21.800020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.236 [2024-12-03 11:56:21.800061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.236 [2024-12-03 11:56:21.800082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.236 [2024-12-03 11:56:21.800093] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.236 [2024-12-03 11:56:21.800102] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:51.236 [2024-12-03 11:56:21.810438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:51.236 qpair failed and we were unable to recover it. 00:28:51.236 [2024-12-03 11:56:21.810566] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:51.236 A controller has encountered a failure and is being reset. 00:28:51.236 [2024-12-03 11:56:21.820086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.236 [2024-12-03 11:56:21.820136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.236 [2024-12-03 11:56:21.820163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.236 [2024-12-03 11:56:21.820178] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.236 [2024-12-03 11:56:21.820191] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:51.236 [2024-12-03 11:56:21.830515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.236 qpair failed and we were unable to recover it. 00:28:51.236 [2024-12-03 11:56:21.840134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.236 [2024-12-03 11:56:21.840172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.236 [2024-12-03 11:56:21.840190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.236 [2024-12-03 11:56:21.840200] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.236 [2024-12-03 11:56:21.840208] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:51.511 [2024-12-03 11:56:21.850634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.511 qpair failed and we were unable to recover it. 
00:28:51.511 [2024-12-03 11:56:21.850761] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:51.511 [2024-12-03 11:56:21.884211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.511 Controller properly reset. 00:28:51.511 Initializing NVMe Controllers 00:28:51.511 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.511 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.511 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:51.511 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:51.511 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:51.511 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:51.511 Initialization complete. Launching workers. 00:28:51.511 Starting thread on core 1 00:28:51.511 Starting thread on core 2 00:28:51.511 Starting thread on core 3 00:28:51.511 Starting thread on core 0 00:28:51.511 11:56:21 -- host/target_disconnect.sh@59 -- # sync 00:28:51.511 00:28:51.511 real 0m12.589s 00:28:51.511 user 0m27.349s 00:28:51.511 sys 0m3.020s 00:28:51.511 11:56:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:51.511 11:56:21 -- common/autotest_common.sh@10 -- # set +x 00:28:51.511 ************************************ 00:28:51.511 END TEST nvmf_target_disconnect_tc2 00:28:51.512 ************************************ 00:28:51.512 11:56:21 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:28:51.512 11:56:21 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:28:51.512 11:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:51.512 11:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:51.512 11:56:21 -- common/autotest_common.sh@10 -- # set +x 00:28:51.512 ************************************ 00:28:51.512 START TEST nvmf_target_disconnect_tc3 00:28:51.512 ************************************ 00:28:51.512 11:56:21 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc3 00:28:51.512 11:56:21 -- host/target_disconnect.sh@65 -- # reconnectpid=3901255 00:28:51.512 11:56:21 -- host/target_disconnect.sh@67 -- # sleep 2 00:28:51.512 11:56:21 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:28:51.512 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.411 11:56:23 -- host/target_disconnect.sh@68 -- # kill -9 3900001 00:28:53.411 11:56:23 -- host/target_disconnect.sh@70 -- # sleep 2 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed 
with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Read completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.788 Write completed with error (sct=0, sc=8) 00:28:54.788 starting I/O failed 00:28:54.789 [2024-12-03 11:56:25.176705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.721 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 3900001 Killed "${NVMF_APP[@]}" "$@" 00:28:55.721 11:56:26 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:28:55.721 11:56:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:55.722 11:56:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:55.722 11:56:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:55.722 11:56:26 -- common/autotest_common.sh@10 -- # set +x 00:28:55.722 11:56:26 -- nvmf/common.sh@469 -- # nvmfpid=3901950 00:28:55.722 11:56:26 -- nvmf/common.sh@470 -- # waitforlisten 3901950 00:28:55.722 11:56:26 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:55.722 11:56:26 -- common/autotest_common.sh@829 -- # '[' -z 3901950 ']' 00:28:55.722 11:56:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.722 11:56:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.722 11:56:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.722 11:56:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.722 11:56:26 -- common/autotest_common.sh@10 -- # set +x 00:28:55.722 [2024-12-03 11:56:26.054284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:55.722 [2024-12-03 11:56:26.054337] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.722 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.722 [2024-12-03 11:56:26.141852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Write completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error (sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 Read completed with error 
(sct=0, sc=8) 00:28:55.722 starting I/O failed 00:28:55.722 [2024-12-03 11:56:26.181758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:55.722 [2024-12-03 11:56:26.209705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:55.722 [2024-12-03 11:56:26.209814] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.722 [2024-12-03 11:56:26.209825] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.722 [2024-12-03 11:56:26.209834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.722 [2024-12-03 11:56:26.209953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:55.722 [2024-12-03 11:56:26.210065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:55.722 [2024-12-03 11:56:26.210172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:55.722 [2024-12-03 11:56:26.210173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:56.289 11:56:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.289 11:56:26 -- common/autotest_common.sh@862 -- # return 0 00:28:56.289 11:56:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:56.289 11:56:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:56.289 11:56:26 -- common/autotest_common.sh@10 -- # set +x 00:28:56.549 11:56:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.549 11:56:26 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:56.549 11:56:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.549 11:56:26 -- common/autotest_common.sh@10 -- # set +x 00:28:56.549 Malloc0 00:28:56.549 11:56:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.549 11:56:26 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:56.549 11:56:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.549 11:56:26 -- common/autotest_common.sh@10 -- # set +x 00:28:56.549 [2024-12-03 11:56:26.974532] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bed3c0/0x1bf8dc0) succeed. 00:28:56.549 [2024-12-03 11:56:26.984082] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bee9b0/0x1c78e00) succeed. 
00:28:56.549 11:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.549 11:56:27 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:56.549 11:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.549 11:56:27 -- common/autotest_common.sh@10 -- # set +x 00:28:56.549 11:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.549 11:56:27 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:56.549 11:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.549 11:56:27 -- common/autotest_common.sh@10 -- # set +x 00:28:56.549 11:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.549 11:56:27 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:28:56.549 11:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.549 11:56:27 -- common/autotest_common.sh@10 -- # set +x 00:28:56.549 [2024-12-03 11:56:27.127119] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:28:56.549 11:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.549 11:56:27 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:28:56.549 11:56:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.549 11:56:27 -- common/autotest_common.sh@10 -- # set +x 00:28:56.549 11:56:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.549 11:56:27 -- host/target_disconnect.sh@73 -- # wait 3901255 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error 
(sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Read completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 Write completed with error (sct=0, sc=8) 00:28:56.808 starting I/O failed 00:28:56.808 [2024-12-03 11:56:27.186922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.808 [2024-12-03 11:56:27.188613] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:56.808 [2024-12-03 11:56:27.188633] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:56.808 [2024-12-03 11:56:27.188651] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:57.742 [2024-12-03 11:56:28.192547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.742 qpair failed and we were unable to recover it. 00:28:57.742 [2024-12-03 11:56:28.193977] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:57.742 [2024-12-03 11:56:28.193994] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:57.742 [2024-12-03 11:56:28.194002] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:58.676 [2024-12-03 11:56:29.197914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.676 qpair failed and we were unable to recover it. 00:28:58.676 [2024-12-03 11:56:29.199432] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:58.676 [2024-12-03 11:56:29.199449] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:58.676 [2024-12-03 11:56:29.199458] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:59.612 [2024-12-03 11:56:30.203452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.612 qpair failed and we were unable to recover it. 
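Note: the reconnect attempts below are aimed at the failover target configured by the rpc_cmd calls captured just above. Outside the autotest wrappers, the same configuration could be applied to a running nvmf_tgt with scripts/rpc.py; this is a minimal sketch that simply reuses the bdev name, NQN, and 192.168.100.9 listener from this log (rpc_cmd is the test harness wrapper around rpc.py):
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                      # backing namespace
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024   # RDMA transport
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420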
00:28:59.612 [2024-12-03 11:56:30.204965] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:59.612 [2024-12-03 11:56:30.204985] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:59.612 [2024-12-03 11:56:30.204994] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:00.988 [2024-12-03 11:56:31.208921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.988 qpair failed and we were unable to recover it. 00:29:00.988 [2024-12-03 11:56:31.210473] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:00.988 [2024-12-03 11:56:31.210490] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:00.988 [2024-12-03 11:56:31.210499] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:01.924 [2024-12-03 11:56:32.214362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-03 11:56:32.215933] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:01.924 [2024-12-03 11:56:32.215951] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:01.924 [2024-12-03 11:56:32.215958] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:02.861 [2024-12-03 11:56:33.219756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:02.861 qpair failed and we were unable to recover it. 00:29:02.861 [2024-12-03 11:56:33.221129] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:02.861 [2024-12-03 11:56:33.221147] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:02.861 [2024-12-03 11:56:33.221156] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:03.798 [2024-12-03 11:56:34.224906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:03.798 qpair failed and we were unable to recover it. 
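The host side of this retry loop is the reconnect example launched at target_disconnect.sh line 63 earlier in the log; each pair of "Failed to connect rqpair" / "qpair failed and we were unable to recover it" messages above is one rejected connection attempt against the primary address. A sketch of that invocation, with flag meanings noted as assumptions based on the example's perf-style options:
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
  # -q queue depth, -o I/O size in bytes, -w workload pattern, -M read percentage (assumed),
  # -t run time in seconds, -c core mask; alt_traddr in the transport ID supplies the
  # 192.168.100.9 failover address the controller resorts to once the primary stays down.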
00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Write completed with error (sct=0, sc=8) 00:29:04.733 starting I/O failed 00:29:04.733 Read completed with error (sct=0, sc=8) 00:29:04.734 starting I/O failed 00:29:04.734 Read completed with error (sct=0, sc=8) 00:29:04.734 starting I/O failed 00:29:04.734 [2024-12-03 11:56:35.229995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.734 [2024-12-03 11:56:35.231709] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.734 [2024-12-03 11:56:35.231727] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.734 [2024-12-03 11:56:35.231735] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:29:05.666 [2024-12-03 11:56:36.235592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 4 00:29:05.666 qpair failed and we were unable to recover it. 00:29:05.666 [2024-12-03 11:56:36.237052] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:05.666 [2024-12-03 11:56:36.237069] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:05.666 [2024-12-03 11:56:36.237079] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:29:07.041 [2024-12-03 11:56:37.240980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.041 qpair failed and we were unable to recover it. 00:29:07.041 [2024-12-03 11:56:37.241142] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:07.041 A controller has encountered a failure and is being reset. 00:29:07.041 Resorting to new failover address 192.168.100.9 00:29:07.041 [2024-12-03 11:56:37.241243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.041 [2024-12-03 11:56:37.241315] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:07.041 [2024-12-03 11:56:37.273951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:07.041 Controller properly reset. 00:29:07.041 Initializing NVMe Controllers 00:29:07.041 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.041 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.041 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:07.041 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:07.041 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:07.042 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:07.042 Initialization complete. Launching workers. 
00:29:07.042 Starting thread on core 1 00:29:07.042 Starting thread on core 2 00:29:07.042 Starting thread on core 3 00:29:07.042 Starting thread on core 0 00:29:07.042 11:56:37 -- host/target_disconnect.sh@74 -- # sync 00:29:07.042 00:29:07.042 real 0m15.362s 00:29:07.042 user 0m56.452s 00:29:07.042 sys 0m4.639s 00:29:07.042 11:56:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:07.042 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.042 ************************************ 00:29:07.042 END TEST nvmf_target_disconnect_tc3 00:29:07.042 ************************************ 00:29:07.042 11:56:37 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:07.042 11:56:37 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:07.042 11:56:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:07.042 11:56:37 -- nvmf/common.sh@116 -- # sync 00:29:07.042 11:56:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:07.042 11:56:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:07.042 11:56:37 -- nvmf/common.sh@119 -- # set +e 00:29:07.042 11:56:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:07.042 11:56:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:07.042 rmmod nvme_rdma 00:29:07.042 rmmod nvme_fabrics 00:29:07.042 11:56:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:07.042 11:56:37 -- nvmf/common.sh@123 -- # set -e 00:29:07.042 11:56:37 -- nvmf/common.sh@124 -- # return 0 00:29:07.042 11:56:37 -- nvmf/common.sh@477 -- # '[' -n 3901950 ']' 00:29:07.042 11:56:37 -- nvmf/common.sh@478 -- # killprocess 3901950 00:29:07.042 11:56:37 -- common/autotest_common.sh@936 -- # '[' -z 3901950 ']' 00:29:07.042 11:56:37 -- common/autotest_common.sh@940 -- # kill -0 3901950 00:29:07.042 11:56:37 -- common/autotest_common.sh@941 -- # uname 00:29:07.042 11:56:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:07.042 11:56:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3901950 00:29:07.042 11:56:37 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:29:07.042 11:56:37 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:29:07.042 11:56:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3901950' 00:29:07.042 killing process with pid 3901950 00:29:07.042 11:56:37 -- common/autotest_common.sh@955 -- # kill 3901950 00:29:07.042 11:56:37 -- common/autotest_common.sh@960 -- # wait 3901950 00:29:07.301 11:56:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:07.301 11:56:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:07.301 00:29:07.301 real 0m36.388s 00:29:07.301 user 2m12.470s 00:29:07.301 sys 0m13.319s 00:29:07.301 11:56:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:07.301 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.301 ************************************ 00:29:07.301 END TEST nvmf_target_disconnect 00:29:07.301 ************************************ 00:29:07.301 11:56:37 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:07.301 11:56:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:07.301 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.301 11:56:37 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:07.301 00:29:07.301 real 21m17.116s 00:29:07.301 user 67m50.400s 00:29:07.301 sys 5m0.768s 00:29:07.301 11:56:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:07.301 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.301 ************************************ 
00:29:07.301 END TEST nvmf_rdma 00:29:07.301 ************************************ 00:29:07.561 11:56:37 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:07.561 11:56:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:07.561 11:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.561 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.561 ************************************ 00:29:07.561 START TEST spdkcli_nvmf_rdma 00:29:07.561 ************************************ 00:29:07.561 11:56:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:07.561 * Looking for test storage... 00:29:07.561 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:29:07.561 11:56:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:07.561 11:56:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:07.561 11:56:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:07.561 11:56:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:07.561 11:56:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:07.561 11:56:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:07.561 11:56:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:07.561 11:56:38 -- scripts/common.sh@335 -- # IFS=.-: 00:29:07.561 11:56:38 -- scripts/common.sh@335 -- # read -ra ver1 00:29:07.561 11:56:38 -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.561 11:56:38 -- scripts/common.sh@336 -- # read -ra ver2 00:29:07.561 11:56:38 -- scripts/common.sh@337 -- # local 'op=<' 00:29:07.561 11:56:38 -- scripts/common.sh@339 -- # ver1_l=2 00:29:07.561 11:56:38 -- scripts/common.sh@340 -- # ver2_l=1 00:29:07.561 11:56:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:07.561 11:56:38 -- scripts/common.sh@343 -- # case "$op" in 00:29:07.561 11:56:38 -- scripts/common.sh@344 -- # : 1 00:29:07.561 11:56:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:07.561 11:56:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.561 11:56:38 -- scripts/common.sh@364 -- # decimal 1 00:29:07.561 11:56:38 -- scripts/common.sh@352 -- # local d=1 00:29:07.561 11:56:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.561 11:56:38 -- scripts/common.sh@354 -- # echo 1 00:29:07.561 11:56:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:07.561 11:56:38 -- scripts/common.sh@365 -- # decimal 2 00:29:07.561 11:56:38 -- scripts/common.sh@352 -- # local d=2 00:29:07.561 11:56:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.561 11:56:38 -- scripts/common.sh@354 -- # echo 2 00:29:07.561 11:56:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:07.561 11:56:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:07.561 11:56:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:07.561 11:56:38 -- scripts/common.sh@367 -- # return 0 00:29:07.561 11:56:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.561 11:56:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:07.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.561 --rc genhtml_branch_coverage=1 00:29:07.561 --rc genhtml_function_coverage=1 00:29:07.561 --rc genhtml_legend=1 00:29:07.561 --rc geninfo_all_blocks=1 00:29:07.561 --rc geninfo_unexecuted_blocks=1 00:29:07.561 00:29:07.561 ' 00:29:07.561 11:56:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:07.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.561 --rc genhtml_branch_coverage=1 00:29:07.562 --rc genhtml_function_coverage=1 00:29:07.562 --rc genhtml_legend=1 00:29:07.562 --rc geninfo_all_blocks=1 00:29:07.562 --rc geninfo_unexecuted_blocks=1 00:29:07.562 00:29:07.562 ' 00:29:07.562 11:56:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:07.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.562 --rc genhtml_branch_coverage=1 00:29:07.562 --rc genhtml_function_coverage=1 00:29:07.562 --rc genhtml_legend=1 00:29:07.562 --rc geninfo_all_blocks=1 00:29:07.562 --rc geninfo_unexecuted_blocks=1 00:29:07.562 00:29:07.562 ' 00:29:07.562 11:56:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:07.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.562 --rc genhtml_branch_coverage=1 00:29:07.562 --rc genhtml_function_coverage=1 00:29:07.562 --rc genhtml_legend=1 00:29:07.562 --rc geninfo_all_blocks=1 00:29:07.562 --rc geninfo_unexecuted_blocks=1 00:29:07.562 00:29:07.562 ' 00:29:07.562 11:56:38 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:29:07.562 11:56:38 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:07.562 11:56:38 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:29:07.562 11:56:38 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.562 11:56:38 -- nvmf/common.sh@7 -- # uname -s 00:29:07.562 11:56:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.562 11:56:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.562 11:56:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.562 11:56:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.562 11:56:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.562 11:56:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:29:07.562 11:56:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.562 11:56:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.562 11:56:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.562 11:56:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.562 11:56:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:07.562 11:56:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:07.562 11:56:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.562 11:56:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.562 11:56:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.562 11:56:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:07.562 11:56:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.562 11:56:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.562 11:56:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.562 11:56:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.562 11:56:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.562 11:56:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.562 11:56:38 -- paths/export.sh@5 -- # export PATH 00:29:07.562 11:56:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.562 11:56:38 -- nvmf/common.sh@46 -- # : 0 00:29:07.562 11:56:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:07.562 11:56:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:07.562 11:56:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:07.562 11:56:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.562 11:56:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.562 11:56:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:07.562 11:56:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 
00:29:07.562 11:56:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:07.562 11:56:38 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:07.562 11:56:38 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:07.562 11:56:38 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:07.562 11:56:38 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:07.562 11:56:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:07.562 11:56:38 -- common/autotest_common.sh@10 -- # set +x 00:29:07.562 11:56:38 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:07.562 11:56:38 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3904100 00:29:07.562 11:56:38 -- spdkcli/common.sh@34 -- # waitforlisten 3904100 00:29:07.562 11:56:38 -- common/autotest_common.sh@829 -- # '[' -z 3904100 ']' 00:29:07.562 11:56:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.562 11:56:38 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:07.562 11:56:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.562 11:56:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.562 11:56:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.562 11:56:38 -- common/autotest_common.sh@10 -- # set +x 00:29:07.821 [2024-12-03 11:56:38.190622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:07.822 [2024-12-03 11:56:38.190681] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904100 ] 00:29:07.822 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.822 [2024-12-03 11:56:38.259389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:07.822 [2024-12-03 11:56:38.333527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:07.822 [2024-12-03 11:56:38.333669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.822 [2024-12-03 11:56:38.333671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.389 11:56:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.389 11:56:38 -- common/autotest_common.sh@862 -- # return 0 00:29:08.389 11:56:38 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:08.389 11:56:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:08.389 11:56:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.648 11:56:39 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:08.648 11:56:39 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:08.648 11:56:39 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:08.648 11:56:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:08.648 11:56:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.648 11:56:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:08.648 11:56:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:08.648 11:56:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:08.648 11:56:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.648 11:56:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:08.648 11:56:39 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:08.648 11:56:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:08.648 11:56:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:08.648 11:56:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:08.648 11:56:39 -- common/autotest_common.sh@10 -- # set +x 00:29:15.206 11:56:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:15.206 11:56:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:15.206 11:56:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:15.206 11:56:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:15.206 11:56:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:15.206 11:56:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:15.206 11:56:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:15.206 11:56:45 -- nvmf/common.sh@294 -- # net_devs=() 00:29:15.206 11:56:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:15.206 11:56:45 -- nvmf/common.sh@295 -- # e810=() 00:29:15.206 11:56:45 -- nvmf/common.sh@295 -- # local -ga e810 00:29:15.206 11:56:45 -- nvmf/common.sh@296 -- # x722=() 00:29:15.206 11:56:45 -- nvmf/common.sh@296 -- # local -ga x722 00:29:15.206 11:56:45 -- nvmf/common.sh@297 -- # mlx=() 00:29:15.206 11:56:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:15.206 11:56:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.206 11:56:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:15.206 11:56:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:15.206 11:56:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:15.206 11:56:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:15.206 11:56:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:15.206 11:56:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:15.206 11:56:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:15.206 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:15.206 11:56:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:15.206 11:56:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:15.206 11:56:45 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:15.206 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:15.206 11:56:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:15.206 11:56:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:15.206 11:56:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:15.206 11:56:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.206 11:56:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:15.206 11:56:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.206 11:56:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:15.206 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:15.206 11:56:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.206 11:56:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:15.206 11:56:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.206 11:56:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:15.206 11:56:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.206 11:56:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:15.206 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:15.206 11:56:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.206 11:56:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:15.206 11:56:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:15.206 11:56:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:15.206 11:56:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:15.206 11:56:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:15.206 11:56:45 -- nvmf/common.sh@57 -- # uname 00:29:15.206 11:56:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:15.206 11:56:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:15.206 11:56:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:15.206 11:56:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:15.206 11:56:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:15.206 11:56:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:15.206 11:56:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:15.207 11:56:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:15.207 11:56:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:15.207 11:56:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:15.207 11:56:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:15.207 11:56:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:15.207 11:56:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:15.207 11:56:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:15.207 11:56:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:15.464 11:56:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:15.464 11:56:45 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:29:15.464 11:56:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.464 11:56:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:15.464 11:56:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:15.464 11:56:45 -- nvmf/common.sh@104 -- # continue 2 00:29:15.464 11:56:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:15.465 11:56:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.465 11:56:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:15.465 11:56:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.465 11:56:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:15.465 11:56:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:15.465 11:56:45 -- nvmf/common.sh@104 -- # continue 2 00:29:15.465 11:56:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:15.465 11:56:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:15.465 11:56:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:15.465 11:56:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:15.465 11:56:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:15.465 11:56:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:15.465 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:15.465 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:15.465 altname enp217s0f0np0 00:29:15.465 altname ens818f0np0 00:29:15.465 inet 192.168.100.8/24 scope global mlx_0_0 00:29:15.465 valid_lft forever preferred_lft forever 00:29:15.465 11:56:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:15.465 11:56:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:15.465 11:56:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:15.465 11:56:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:15.465 11:56:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:15.465 11:56:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:15.465 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:15.465 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:15.465 altname enp217s0f1np1 00:29:15.465 altname ens818f1np1 00:29:15.465 inet 192.168.100.9/24 scope global mlx_0_1 00:29:15.465 valid_lft forever preferred_lft forever 00:29:15.465 11:56:45 -- nvmf/common.sh@410 -- # return 0 00:29:15.465 11:56:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:15.465 11:56:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:15.465 11:56:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:15.465 11:56:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:15.465 11:56:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:15.465 11:56:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:15.465 11:56:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:15.465 11:56:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:15.465 11:56:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:15.465 11:56:45 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:15.465 11:56:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:15.465 11:56:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.465 11:56:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:15.465 11:56:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:15.465 11:56:45 -- nvmf/common.sh@104 -- # continue 2 00:29:15.465 11:56:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:15.465 11:56:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.465 11:56:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:15.465 11:56:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.465 11:56:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:15.465 11:56:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:15.465 11:56:45 -- nvmf/common.sh@104 -- # continue 2 00:29:15.465 11:56:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:15.465 11:56:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:15.465 11:56:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:15.465 11:56:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:15.465 11:56:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:15.465 11:56:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:15.465 11:56:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:15.465 11:56:45 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:15.465 192.168.100.9' 00:29:15.465 11:56:45 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:15.465 192.168.100.9' 00:29:15.465 11:56:45 -- nvmf/common.sh@445 -- # head -n 1 00:29:15.465 11:56:45 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:15.465 11:56:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:15.465 192.168.100.9' 00:29:15.465 11:56:45 -- nvmf/common.sh@446 -- # tail -n +2 00:29:15.465 11:56:45 -- nvmf/common.sh@446 -- # head -n 1 00:29:15.465 11:56:45 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:15.465 11:56:45 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:15.465 11:56:45 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:15.465 11:56:45 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:15.465 11:56:45 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:15.465 11:56:45 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:15.465 11:56:45 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:15.465 11:56:45 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:15.465 11:56:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:15.465 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:29:15.465 11:56:45 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:15.465 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:15.465 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:15.465 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:29:15.465 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:15.465 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:15.465 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:15.465 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:15.465 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:15.465 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:15.465 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:15.465 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:15.465 ' 00:29:15.723 [2024-12-03 11:56:46.336299] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:18.325 [2024-12-03 11:56:48.415312] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x201b6e0/0x201da00) succeed. 
00:29:18.325 [2024-12-03 11:56:48.425206] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x201cdc0/0x205f0a0) succeed. 00:29:19.262 [2024-12-03 11:56:49.694730] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:21.796 [2024-12-03 11:56:51.938030] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:23.700 [2024-12-03 11:56:53.868549] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:29:25.078 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:25.078 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:25.078 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:25.078 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:25.078 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:25.078 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:25.078 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:25.078 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:25.078 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:25.078 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:25.078 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:25.079 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 
192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:25.079 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:29:25.079 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:25.079 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:25.079 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:25.079 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:25.079 11:56:55 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:25.079 11:56:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:25.079 11:56:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.079 11:56:55 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:25.079 11:56:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:25.079 11:56:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.079 11:56:55 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:25.079 11:56:55 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:25.337 11:56:55 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:25.337 11:56:55 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:25.337 11:56:55 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:25.337 11:56:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:25.337 11:56:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.596 11:56:55 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:25.596 11:56:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:25.596 11:56:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.596 11:56:55 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:25.596 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:25.596 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:25.596 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:25.596 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:29:25.596 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:29:25.596 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:25.596 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:25.596 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:25.596 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:25.596 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:25.596 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:25.596 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:25.596 '\''/bdevs/malloc 
delete Malloc1'\'' '\''Malloc1'\'' 00:29:25.596 ' 00:29:30.868 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:30.868 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:30.868 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:30.868 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:30.868 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:29:30.868 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:29:30.868 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:30.868 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:30.868 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:30.868 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:30.868 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:30.868 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:30.868 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:30.868 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:30.868 11:57:01 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:30.868 11:57:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:30.868 11:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:30.868 11:57:01 -- spdkcli/nvmf.sh@90 -- # killprocess 3904100 00:29:30.868 11:57:01 -- common/autotest_common.sh@936 -- # '[' -z 3904100 ']' 00:29:30.868 11:57:01 -- common/autotest_common.sh@940 -- # kill -0 3904100 00:29:30.868 11:57:01 -- common/autotest_common.sh@941 -- # uname 00:29:30.868 11:57:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:30.868 11:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3904100 00:29:30.869 11:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:30.869 11:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:30.869 11:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3904100' 00:29:30.869 killing process with pid 3904100 00:29:31.127 11:57:01 -- common/autotest_common.sh@955 -- # kill 3904100 00:29:31.127 [2024-12-03 11:57:01.481792] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:31.127 11:57:01 -- common/autotest_common.sh@960 -- # wait 3904100 00:29:31.127 11:57:01 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:29:31.127 11:57:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:31.127 11:57:01 -- nvmf/common.sh@116 -- # sync 00:29:31.127 11:57:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:31.127 11:57:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:31.127 11:57:01 -- nvmf/common.sh@119 -- # set +e 00:29:31.386 11:57:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:31.386 11:57:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:31.386 rmmod nvme_rdma 00:29:31.386 rmmod nvme_fabrics 
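For reference, the spdkcli commands exercised above can also be issued one at a time against a running nvmf target with scripts/spdkcli.py, the same tool the check_match step invokes as "spdkcli.py ll /nvmf". The sketch below is illustrative only: it assumes spdkcli.py accepts a single command per invocation (as the "ll /nvmf" call in this log suggests) and it reuses the bdev names, NQNs and the 192.168.100.8 RDMA listener address from the log; it is not how spdkcli_job.py actually drives this test.

  # Illustrative sketch, not the test implementation.
  SPDKCLI=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py

  # Create a 32 MB malloc bdev with 512-byte blocks and expose it over NVMe/RDMA.
  $SPDKCLI /bdevs/malloc create 32 512 Malloc3
  $SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4

  # Inspect the tree, then tear the configuration back down, mirroring the delete phase above.
  $SPDKCLI ll /nvmf
  $SPDKCLI /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1
  $SPDKCLI /bdevs/malloc delete Malloc3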
00:29:31.386 11:57:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:31.386 11:57:01 -- nvmf/common.sh@123 -- # set -e 00:29:31.386 11:57:01 -- nvmf/common.sh@124 -- # return 0 00:29:31.386 11:57:01 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:29:31.386 11:57:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:31.386 11:57:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:31.386 00:29:31.386 real 0m23.862s 00:29:31.386 user 0m51.435s 00:29:31.386 sys 0m6.139s 00:29:31.386 11:57:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:31.386 11:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:31.386 ************************************ 00:29:31.386 END TEST spdkcli_nvmf_rdma 00:29:31.386 ************************************ 00:29:31.386 11:57:01 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:31.386 11:57:01 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:29:31.386 11:57:01 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:29:31.386 11:57:01 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:31.386 11:57:01 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:31.386 11:57:01 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:29:31.386 11:57:01 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:29:31.386 11:57:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:31.386 11:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:31.386 11:57:01 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:29:31.386 11:57:01 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:29:31.386 11:57:01 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:29:31.386 11:57:01 -- common/autotest_common.sh@10 -- # set +x 00:29:37.952 INFO: APP EXITING 00:29:37.952 INFO: killing all VMs 00:29:37.952 INFO: killing vhost app 00:29:37.952 INFO: EXIT DONE 00:29:40.510 Waiting for block devices as requested 00:29:40.510 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:40.510 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:40.510 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:40.510 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:40.770 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:40.770 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:40.770 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:41.030 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:41.030 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:41.030 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:41.290 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:41.290 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:41.290 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:41.549 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:41.549 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:41.549 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:41.809 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:45.095 
Cleaning 00:29:45.095 Removing: /var/run/dpdk/spdk0/config 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:45.095 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:45.095 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:45.095 Removing: /var/run/dpdk/spdk1/config 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:45.095 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:45.095 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:45.095 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:45.095 Removing: /var/run/dpdk/spdk2/config 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:45.095 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:45.095 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:45.095 Removing: /var/run/dpdk/spdk3/config 00:29:45.095 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:45.095 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:45.095 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:45.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:45.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:45.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:45.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:45.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:45.096 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:45.096 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:45.096 Removing: /var/run/dpdk/spdk4/config 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:45.353 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
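The "Removing:" entries above and below are the autotest cleanup deleting leftover DPDK runtime state for each SPDK instance (file prefixes spdk0 through spdk4) under /var/run/dpdk/. A minimal, purely illustrative equivalent is sketched below; the loop is an assumption for illustration, not the actual autotest_cleanup implementation, though the file names match what the log removes.

  # Illustrative sketch only: remove per-instance DPDK runtime files left under
  # /var/run/dpdk/<prefix>/ by SPDK targets started with different file prefixes.
  for prefix in spdk0 spdk1 spdk2 spdk3 spdk4; do
      rm -f /var/run/dpdk/"$prefix"/config \
            /var/run/dpdk/"$prefix"/fbarray_memseg-2048k-* \
            /var/run/dpdk/"$prefix"/fbarray_memzone \
            /var/run/dpdk/"$prefix"/hugepage_info \
            /var/run/dpdk/"$prefix"/mp_socket
      rmdir /var/run/dpdk/"$prefix" 2>/dev/null || true
  done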
00:29:45.353 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:45.353 Removing: /dev/shm/bdevperf_trace.pid3733152 00:29:45.353 Removing: /dev/shm/bdevperf_trace.pid3828251 00:29:45.353 Removing: /dev/shm/bdev_svc_trace.1 00:29:45.353 Removing: /dev/shm/nvmf_trace.0 00:29:45.353 Removing: /dev/shm/spdk_tgt_trace.pid3568107 00:29:45.353 Removing: /var/run/dpdk/spdk0 00:29:45.353 Removing: /var/run/dpdk/spdk1 00:29:45.353 Removing: /var/run/dpdk/spdk2 00:29:45.353 Removing: /var/run/dpdk/spdk3 00:29:45.353 Removing: /var/run/dpdk/spdk4 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3565446 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3566723 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3568107 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3568751 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3574387 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3575881 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3576209 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3576536 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3576939 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3577309 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3577511 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3577793 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3578115 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3578978 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3582179 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3582484 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3582791 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3583053 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3583626 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3583840 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3584407 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3584485 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3584786 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3585054 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3585249 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3585365 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3585996 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3586191 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3586488 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3586694 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3586925 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3587006 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3587278 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3587561 00:29:45.353 Removing: /var/run/dpdk/spdk_pid3587831 00:29:45.354 Removing: /var/run/dpdk/spdk_pid3588041 00:29:45.354 Removing: /var/run/dpdk/spdk_pid3588219 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3588432 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3588699 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3588985 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3589253 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3589540 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3589808 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3590076 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3590268 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3590497 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3590680 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3590962 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3591230 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3591516 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3591789 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3592071 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3592337 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3592565 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3592750 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3592952 00:29:45.612 Removing: 
/var/run/dpdk/spdk_pid3593205 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3593488 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3593755 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3594048 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3594318 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3594601 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3594850 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3595111 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3595302 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3595525 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3595749 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3596040 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3596313 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3596594 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3596868 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3597150 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3597232 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3597622 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3601801 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3699293 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3703495 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3714058 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3719428 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3723118 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3723925 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3733152 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3733443 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3738300 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3744229 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3746987 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3757278 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3782112 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3785831 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3791485 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3825656 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3827094 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3828251 00:29:45.612 Removing: /var/run/dpdk/spdk_pid3832649 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3839855 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3840726 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3841750 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3842590 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3843102 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3847646 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3847653 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3852227 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3852768 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3853340 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3854116 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3854129 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3856566 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3858467 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3860361 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3862325 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3864224 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3866182 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3872963 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3873626 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3875939 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3877033 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3884201 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3886965 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3892707 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3892983 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3898883 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3899215 00:29:45.870 Removing: /var/run/dpdk/spdk_pid3901255 00:29:45.870 Removing: 
/var/run/dpdk/spdk_pid3904100 00:29:45.870 Clean 00:29:45.870 killing process with pid 3516632 00:30:03.953 killing process with pid 3516629 00:30:03.953 killing process with pid 3516631 00:30:03.953 killing process with pid 3516630 00:30:03.953 11:57:32 -- common/autotest_common.sh@1446 -- # return 0 00:30:03.953 11:57:32 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:03.953 11:57:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.953 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:30:03.953 11:57:32 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:03.953 11:57:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.954 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:30:03.954 11:57:32 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:03.954 11:57:32 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:30:03.954 11:57:32 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:30:03.954 11:57:32 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:03.954 11:57:32 -- spdk/autotest.sh@383 -- # hostname 00:30:03.954 11:57:32 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:30:03.954 geninfo: WARNING: invalid characters removed from testname! 00:30:22.136 11:57:51 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:23.072 11:57:53 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:24.446 11:57:54 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:25.827 11:57:56 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:27.730 11:57:57 -- spdk/autotest.sh@391 -- # 
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:29.107 11:57:59 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:31.010 11:58:01 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:31.010 11:58:01 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:30:31.010 11:58:01 -- common/autotest_common.sh@1690 -- $ lcov --version 00:30:31.010 11:58:01 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:30:31.010 11:58:01 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:30:31.010 11:58:01 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:30:31.010 11:58:01 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:30:31.010 11:58:01 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:30:31.010 11:58:01 -- scripts/common.sh@335 -- $ IFS=.-: 00:30:31.010 11:58:01 -- scripts/common.sh@335 -- $ read -ra ver1 00:30:31.010 11:58:01 -- scripts/common.sh@336 -- $ IFS=.-: 00:30:31.010 11:58:01 -- scripts/common.sh@336 -- $ read -ra ver2 00:30:31.010 11:58:01 -- scripts/common.sh@337 -- $ local 'op=<' 00:30:31.010 11:58:01 -- scripts/common.sh@339 -- $ ver1_l=2 00:30:31.010 11:58:01 -- scripts/common.sh@340 -- $ ver2_l=1 00:30:31.010 11:58:01 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:30:31.010 11:58:01 -- scripts/common.sh@343 -- $ case "$op" in 00:30:31.010 11:58:01 -- scripts/common.sh@344 -- $ : 1 00:30:31.010 11:58:01 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:30:31.010 11:58:01 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:31.010 11:58:01 -- scripts/common.sh@364 -- $ decimal 1 00:30:31.010 11:58:01 -- scripts/common.sh@352 -- $ local d=1 00:30:31.010 11:58:01 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:30:31.010 11:58:01 -- scripts/common.sh@354 -- $ echo 1 00:30:31.010 11:58:01 -- scripts/common.sh@364 -- $ ver1[v]=1 00:30:31.010 11:58:01 -- scripts/common.sh@365 -- $ decimal 2 00:30:31.010 11:58:01 -- scripts/common.sh@352 -- $ local d=2 00:30:31.010 11:58:01 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:30:31.010 11:58:01 -- scripts/common.sh@354 -- $ echo 2 00:30:31.010 11:58:01 -- scripts/common.sh@365 -- $ ver2[v]=2 00:30:31.010 11:58:01 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:30:31.010 11:58:01 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:30:31.010 11:58:01 -- scripts/common.sh@367 -- $ return 0 00:30:31.010 11:58:01 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.010 11:58:01 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:30:31.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.010 --rc genhtml_branch_coverage=1 00:30:31.010 --rc genhtml_function_coverage=1 00:30:31.010 --rc genhtml_legend=1 00:30:31.010 --rc geninfo_all_blocks=1 00:30:31.010 --rc geninfo_unexecuted_blocks=1 00:30:31.010 00:30:31.010 ' 00:30:31.010 11:58:01 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:30:31.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.010 --rc genhtml_branch_coverage=1 00:30:31.010 --rc genhtml_function_coverage=1 00:30:31.010 --rc genhtml_legend=1 00:30:31.010 --rc geninfo_all_blocks=1 00:30:31.010 --rc geninfo_unexecuted_blocks=1 00:30:31.010 00:30:31.010 ' 00:30:31.010 11:58:01 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:30:31.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.010 --rc genhtml_branch_coverage=1 00:30:31.010 --rc genhtml_function_coverage=1 00:30:31.010 --rc genhtml_legend=1 00:30:31.010 --rc geninfo_all_blocks=1 00:30:31.010 --rc geninfo_unexecuted_blocks=1 00:30:31.010 00:30:31.010 ' 00:30:31.010 11:58:01 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:30:31.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.011 --rc genhtml_branch_coverage=1 00:30:31.011 --rc genhtml_function_coverage=1 00:30:31.011 --rc genhtml_legend=1 00:30:31.011 --rc geninfo_all_blocks=1 00:30:31.011 --rc geninfo_unexecuted_blocks=1 00:30:31.011 00:30:31.011 ' 00:30:31.011 11:58:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:31.011 11:58:01 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:31.011 11:58:01 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.011 11:58:01 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.011 11:58:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.011 11:58:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.011 11:58:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.011 11:58:01 -- paths/export.sh@5 -- $ export PATH 00:30:31.011 11:58:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.011 11:58:01 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:30:31.011 11:58:01 -- common/autobuild_common.sh@440 -- $ date +%s 00:30:31.011 11:58:01 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733223481.XXXXXX 00:30:31.011 11:58:01 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733223481.bhezDw 00:30:31.011 11:58:01 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:30:31.011 11:58:01 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:30:31.011 11:58:01 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:30:31.011 11:58:01 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:31.011 11:58:01 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:31.011 11:58:01 -- common/autobuild_common.sh@456 -- $ get_config_params 00:30:31.011 11:58:01 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:30:31.011 11:58:01 -- common/autotest_common.sh@10 -- $ set +x 00:30:31.011 11:58:01 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:30:31.011 11:58:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:30:31.011 11:58:01 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:31.011 11:58:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:31.011 11:58:01 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:31.011 11:58:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:31.011 11:58:01 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:31.011 11:58:01 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:31.011 11:58:01 -- common/autotest_common.sh@735 -- $ '[' -x 
/usr/local/FlameGraph/flamegraph.pl ']' 00:30:31.011 11:58:01 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:31.011 11:58:01 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:31.011 + [[ -n 3473751 ]] 00:30:31.011 + sudo kill 3473751 00:30:31.021 [Pipeline] } 00:30:31.040 [Pipeline] // stage 00:30:31.045 [Pipeline] } 00:30:31.062 [Pipeline] // timeout 00:30:31.067 [Pipeline] } 00:30:31.084 [Pipeline] // catchError 00:30:31.089 [Pipeline] } 00:30:31.106 [Pipeline] // wrap 00:30:31.112 [Pipeline] } 00:30:31.128 [Pipeline] // catchError 00:30:31.138 [Pipeline] stage 00:30:31.140 [Pipeline] { (Epilogue) 00:30:31.155 [Pipeline] catchError 00:30:31.157 [Pipeline] { 00:30:31.172 [Pipeline] echo 00:30:31.174 Cleanup processes 00:30:31.181 [Pipeline] sh 00:30:31.469 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:31.469 3925929 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:31.484 [Pipeline] sh 00:30:31.772 ++ grep -v 'sudo pgrep' 00:30:31.773 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:31.773 ++ awk '{print $1}' 00:30:31.773 + sudo kill -9 00:30:31.773 + true 00:30:31.785 [Pipeline] sh 00:30:32.075 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:32.075 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:30:38.655 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:30:41.955 [Pipeline] sh 00:30:42.240 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:42.241 Artifacts sizes are good 00:30:42.254 [Pipeline] archiveArtifacts 00:30:42.260 Archiving artifacts 00:30:42.424 [Pipeline] sh 00:30:42.743 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:30:42.757 [Pipeline] cleanWs 00:30:42.767 [WS-CLEANUP] Deleting project workspace... 00:30:42.767 [WS-CLEANUP] Deferred wipeout is used... 00:30:42.773 [WS-CLEANUP] done 00:30:42.775 [Pipeline] } 00:30:42.792 [Pipeline] // catchError 00:30:42.805 [Pipeline] sh 00:30:43.088 + logger -p user.info -t JENKINS-CI 00:30:43.097 [Pipeline] } 00:30:43.111 [Pipeline] // stage 00:30:43.117 [Pipeline] } 00:30:43.131 [Pipeline] // node 00:30:43.137 [Pipeline] End of Pipeline 00:30:43.173 Finished: SUCCESS